00:00:00.001 Started by upstream project "autotest-per-patch" build number 126102 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.154 Using shallow fetch with depth 1 00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.189 > git --version # 'git version 2.39.2' 00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.667 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.678 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.689 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:04.689 > git config core.sparsecheckout # timeout=10 00:00:04.699 > git read-tree -mu HEAD # timeout=10 00:00:04.714 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:04.732 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:04.733 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:04.840 [Pipeline] Start of Pipeline 00:00:04.851 [Pipeline] library 00:00:04.852 Loading library shm_lib@master 00:00:04.852 Library shm_lib@master is cached. Copying from home. 00:00:04.866 [Pipeline] node 00:00:04.875 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest_2 00:00:04.877 [Pipeline] { 00:00:04.884 [Pipeline] catchError 00:00:04.885 [Pipeline] { 00:00:04.894 [Pipeline] wrap 00:00:04.904 [Pipeline] { 00:00:04.911 [Pipeline] stage 00:00:04.913 [Pipeline] { (Prologue) 00:00:04.931 [Pipeline] echo 00:00:04.932 Node: VM-host-SM16 00:00:04.937 [Pipeline] cleanWs 00:00:04.944 [WS-CLEANUP] Deleting project workspace... 00:00:04.944 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.948 [WS-CLEANUP] done 00:00:05.098 [Pipeline] setCustomBuildProperty 00:00:05.156 [Pipeline] httpRequest 00:00:05.170 [Pipeline] echo 00:00:05.171 Sorcerer 10.211.164.101 is alive 00:00:05.177 [Pipeline] httpRequest 00:00:05.180 HttpMethod: GET 00:00:05.181 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:05.181 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:05.191 Response Code: HTTP/1.1 200 OK 00:00:05.192 Success: Status code 200 is in the accepted range: 200,404 00:00:05.192 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:07.815 [Pipeline] sh 00:00:08.090 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:08.106 [Pipeline] httpRequest 00:00:08.127 [Pipeline] echo 00:00:08.129 Sorcerer 10.211.164.101 is alive 00:00:08.135 [Pipeline] httpRequest 00:00:08.139 HttpMethod: GET 00:00:08.139 URL: http://10.211.164.101/packages/spdk_b3936a1443c9ac9c12a0d797d932e389ce7a5c85.tar.gz 00:00:08.139 Sending request to url: http://10.211.164.101/packages/spdk_b3936a1443c9ac9c12a0d797d932e389ce7a5c85.tar.gz 00:00:08.205 Response Code: HTTP/1.1 200 OK 00:00:08.206 Success: Status code 200 is in the accepted range: 200,404 00:00:08.206 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk_b3936a1443c9ac9c12a0d797d932e389ce7a5c85.tar.gz 00:01:03.587 [Pipeline] sh 00:01:03.865 + tar --no-same-owner -xf spdk_b3936a1443c9ac9c12a0d797d932e389ce7a5c85.tar.gz 00:01:07.163 [Pipeline] sh 00:01:07.441 + git -C spdk log --oneline -n5 00:01:07.441 b3936a144 accel: introduce tasks in sequence limit 00:01:07.441 719d03c6a sock/uring: only register net impl if supported 00:01:07.441 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:07.441 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:07.441 6c7c1f57e accel: add sequence outstanding stat 00:01:07.461 [Pipeline] writeFile 00:01:07.478 [Pipeline] sh 00:01:07.755 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:07.766 [Pipeline] sh 00:01:08.043 + cat autorun-spdk.conf 00:01:08.043 SPDK_TEST_UNITTEST=1 00:01:08.043 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.043 SPDK_TEST_NVME=1 00:01:08.044 SPDK_TEST_BLOCKDEV=1 00:01:08.044 SPDK_RUN_ASAN=1 00:01:08.044 SPDK_RUN_UBSAN=1 00:01:08.044 SPDK_TEST_RAID5=1 00:01:08.044 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.049 RUN_NIGHTLY=0 00:01:08.051 [Pipeline] } 00:01:08.069 [Pipeline] // stage 00:01:08.085 [Pipeline] stage 00:01:08.087 [Pipeline] { (Run VM) 00:01:08.102 [Pipeline] sh 00:01:08.385 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:08.385 + echo 'Start stage prepare_nvme.sh' 00:01:08.385 Start stage prepare_nvme.sh 00:01:08.385 + [[ -n 1 ]] 00:01:08.385 + disk_prefix=ex1 00:01:08.385 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_2 ]] 00:01:08.385 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf ]] 00:01:08.385 + source /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf 00:01:08.385 ++ SPDK_TEST_UNITTEST=1 00:01:08.385 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.385 ++ SPDK_TEST_NVME=1 00:01:08.385 ++ SPDK_TEST_BLOCKDEV=1 00:01:08.385 ++ SPDK_RUN_ASAN=1 00:01:08.385 ++ SPDK_RUN_UBSAN=1 00:01:08.385 ++ SPDK_TEST_RAID5=1 00:01:08.385 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.385 ++ RUN_NIGHTLY=0 00:01:08.385 + cd 
/var/jenkins/workspace/ubuntu20-vg-autotest_2 00:01:08.385 + nvme_files=() 00:01:08.385 + declare -A nvme_files 00:01:08.385 + backend_dir=/var/lib/libvirt/images/backends 00:01:08.385 + nvme_files['nvme.img']=5G 00:01:08.385 + nvme_files['nvme-cmb.img']=5G 00:01:08.385 + nvme_files['nvme-multi0.img']=4G 00:01:08.385 + nvme_files['nvme-multi1.img']=4G 00:01:08.385 + nvme_files['nvme-multi2.img']=4G 00:01:08.385 + nvme_files['nvme-openstack.img']=8G 00:01:08.385 + nvme_files['nvme-zns.img']=5G 00:01:08.386 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:08.386 + (( SPDK_TEST_FTL == 1 )) 00:01:08.386 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:08.386 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:08.386 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:08.386 + for nvme in "${!nvme_files[@]}" 00:01:08.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:09.319 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.319 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:09.319 + echo 'End stage prepare_nvme.sh' 00:01:09.319 End stage prepare_nvme.sh 00:01:09.330 [Pipeline] sh 00:01:09.608 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:09.608 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2004 00:01:09.608 00:01:09.608 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant 00:01:09.608 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk 00:01:09.608 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_2 00:01:09.608 HELP=0 00:01:09.608 
DRY_RUN=0 00:01:09.608 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img, 00:01:09.608 NVME_DISKS_TYPE=nvme, 00:01:09.608 NVME_AUTO_CREATE=0 00:01:09.608 NVME_DISKS_NAMESPACES=, 00:01:09.608 NVME_CMB=, 00:01:09.608 NVME_PMR=, 00:01:09.608 NVME_ZNS=, 00:01:09.608 NVME_MS=, 00:01:09.608 NVME_FDP=, 00:01:09.608 SPDK_VAGRANT_DISTRO=ubuntu2004 00:01:09.608 SPDK_VAGRANT_VMCPU=10 00:01:09.608 SPDK_VAGRANT_VMRAM=12288 00:01:09.608 SPDK_VAGRANT_PROVIDER=libvirt 00:01:09.608 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:09.608 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:09.608 SPDK_OPENSTACK_NETWORK=0 00:01:09.608 VAGRANT_PACKAGE_BOX=0 00:01:09.608 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:09.608 FORCE_DISTRO=true 00:01:09.608 VAGRANT_BOX_VERSION= 00:01:09.608 EXTRA_VAGRANTFILES= 00:01:09.608 NIC_MODEL=e1000 00:01:09.608 00:01:09.608 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt' 00:01:09.608 /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_2 00:01:12.891 Bringing machine 'default' up with 'libvirt' provider... 00:01:13.825 ==> default: Creating image (snapshot of base box volume). 00:01:13.825 ==> default: Creating domain with the following settings... 00:01:13.825 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720772868_5000ea6a9c9b9ba58b43 00:01:13.825 ==> default: -- Domain type: kvm 00:01:13.825 ==> default: -- Cpus: 10 00:01:13.825 ==> default: -- Feature: acpi 00:01:13.825 ==> default: -- Feature: apic 00:01:13.825 ==> default: -- Feature: pae 00:01:13.825 ==> default: -- Memory: 12288M 00:01:13.825 ==> default: -- Memory Backing: hugepages: 00:01:13.825 ==> default: -- Management MAC: 00:01:13.825 ==> default: -- Loader: 00:01:13.825 ==> default: -- Nvram: 00:01:13.825 ==> default: -- Base box: spdk/ubuntu2004 00:01:13.825 ==> default: -- Storage pool: default 00:01:13.825 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720772868_5000ea6a9c9b9ba58b43.img (20G) 00:01:13.825 ==> default: -- Volume Cache: default 00:01:13.825 ==> default: -- Kernel: 00:01:13.825 ==> default: -- Initrd: 00:01:13.825 ==> default: -- Graphics Type: vnc 00:01:13.825 ==> default: -- Graphics Port: -1 00:01:13.825 ==> default: -- Graphics IP: 127.0.0.1 00:01:13.825 ==> default: -- Graphics Password: Not defined 00:01:13.825 ==> default: -- Video Type: cirrus 00:01:13.825 ==> default: -- Video VRAM: 9216 00:01:13.825 ==> default: -- Sound Type: 00:01:13.825 ==> default: -- Keymap: en-us 00:01:13.825 ==> default: -- TPM Path: 00:01:13.825 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:13.825 ==> default: -- Command line args: 00:01:13.825 ==> default: -> value=-device, 00:01:13.825 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:13.825 ==> default: -> value=-drive, 00:01:13.825 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:13.825 ==> default: -> value=-device, 00:01:13.825 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.083 ==> default: Creating shared folders metadata... 00:01:14.083 ==> default: Starting domain. 00:01:16.083 ==> default: Waiting for domain to get an IP address... 00:01:26.079 ==> default: Waiting for SSH to become available... 
00:01:27.040 ==> default: Configuring and enabling network interfaces... 00:01:29.583 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:34.868 ==> default: Mounting SSHFS shared folder... 00:01:35.127 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:35.127 ==> default: Checking Mount.. 00:01:37.654 ==> default: Checking Mount.. 00:01:37.912 ==> default: Folder Successfully Mounted! 00:01:37.912 ==> default: Running provisioner: file... 00:01:38.171 default: ~/.gitconfig => .gitconfig 00:01:38.171 00:01:38.171 SUCCESS! 00:01:38.171 00:01:38.171 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:38.171 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:38.171 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:38.171 00:01:38.180 [Pipeline] } 00:01:38.198 [Pipeline] // stage 00:01:38.206 [Pipeline] dir 00:01:38.207 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt 00:01:38.208 [Pipeline] { 00:01:38.222 [Pipeline] catchError 00:01:38.224 [Pipeline] { 00:01:38.238 [Pipeline] sh 00:01:38.517 + vagrant ssh-config --host vagrant 00:01:38.517 + sed -ne /^Host/,$p 00:01:38.517 + tee ssh_conf 00:01:42.707 Host vagrant 00:01:42.707 HostName 192.168.121.187 00:01:42.707 User vagrant 00:01:42.707 Port 22 00:01:42.707 UserKnownHostsFile /dev/null 00:01:42.707 StrictHostKeyChecking no 00:01:42.707 PasswordAuthentication no 00:01:42.707 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:42.707 IdentitiesOnly yes 00:01:42.707 LogLevel FATAL 00:01:42.707 ForwardAgent yes 00:01:42.707 ForwardX11 yes 00:01:42.707 00:01:42.722 [Pipeline] withEnv 00:01:42.725 [Pipeline] { 00:01:42.743 [Pipeline] sh 00:01:43.066 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:43.066 source /etc/os-release 00:01:43.066 [[ -e /image.version ]] && img=$(< /image.version) 00:01:43.066 # Minimal, systemd-like check. 00:01:43.066 if [[ -e /.dockerenv ]]; then 00:01:43.066 # Clear garbage from the node's name: 00:01:43.066 # agt-er_autotest_547-896 -> autotest_547-896 00:01:43.066 # $HOSTNAME is the actual container id 00:01:43.066 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:43.066 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:43.066 # We can assume this is a mount from a host where container is running, 00:01:43.066 # so fetch its hostname to easily identify the target swarm worker. 
00:01:43.066 container="$(< /etc/hostname) ($agent)" 00:01:43.066 else 00:01:43.066 # Fallback 00:01:43.066 container=$agent 00:01:43.066 fi 00:01:43.066 fi 00:01:43.066 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:43.066 00:01:43.653 [Pipeline] } 00:01:43.674 [Pipeline] // withEnv 00:01:43.683 [Pipeline] setCustomBuildProperty 00:01:43.699 [Pipeline] stage 00:01:43.702 [Pipeline] { (Tests) 00:01:43.721 [Pipeline] sh 00:01:44.003 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:44.586 [Pipeline] sh 00:01:44.869 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:45.452 [Pipeline] timeout 00:01:45.452 Timeout set to expire in 1 hr 30 min 00:01:45.454 [Pipeline] { 00:01:45.473 [Pipeline] sh 00:01:45.752 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:46.688 HEAD is now at b3936a144 accel: introduce tasks in sequence limit 00:01:46.702 [Pipeline] sh 00:01:46.982 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:47.549 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:47.568 [Pipeline] sh 00:01:47.851 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:48.431 [Pipeline] sh 00:01:48.710 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:49.314 ++ readlink -f spdk_repo 00:01:49.314 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:49.314 + [[ -n /home/vagrant/spdk_repo ]] 00:01:49.314 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:49.314 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:49.314 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:49.314 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:49.314 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:49.314 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:49.314 + cd /home/vagrant/spdk_repo 00:01:49.314 + source /etc/os-release 00:01:49.314 ++ NAME=Ubuntu 00:01:49.314 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:49.314 ++ ID=ubuntu 00:01:49.314 ++ ID_LIKE=debian 00:01:49.314 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:49.314 ++ VERSION_ID=20.04 00:01:49.314 ++ HOME_URL=https://www.ubuntu.com/ 00:01:49.314 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:49.314 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:49.314 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:49.314 ++ VERSION_CODENAME=focal 00:01:49.314 ++ UBUNTU_CODENAME=focal 00:01:49.314 + uname -a 00:01:49.314 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:49.314 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:49.314 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:49.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:49.572 Hugepages 00:01:49.572 node hugesize free / total 00:01:49.572 node0 1048576kB 0 / 0 00:01:49.572 node0 2048kB 0 / 0 00:01:49.572 00:01:49.572 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:49.572 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:49.830 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:49.830 + rm -f /tmp/spdk-ld-path 00:01:49.830 + source autorun-spdk.conf 00:01:49.830 ++ SPDK_TEST_UNITTEST=1 00:01:49.830 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.830 ++ SPDK_TEST_NVME=1 00:01:49.830 ++ SPDK_TEST_BLOCKDEV=1 00:01:49.830 ++ SPDK_RUN_ASAN=1 00:01:49.830 ++ SPDK_RUN_UBSAN=1 00:01:49.830 ++ SPDK_TEST_RAID5=1 00:01:49.830 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.830 ++ RUN_NIGHTLY=0 00:01:49.830 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:49.830 + [[ -n '' ]] 00:01:49.830 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:49.830 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:49.831 + for M in /var/spdk/build-*-manifest.txt 00:01:49.831 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.831 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:49.831 + for M in /var/spdk/build-*-manifest.txt 00:01:49.831 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:49.831 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:49.831 ++ uname 00:01:49.831 + [[ Linux == \L\i\n\u\x ]] 00:01:49.831 + sudo dmesg -T 00:01:49.831 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:49.831 + sudo dmesg --clear 00:01:49.831 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:49.831 + dmesg_pid=2411 00:01:49.831 + [[ Ubuntu == FreeBSD ]] 00:01:49.831 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.831 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.831 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.831 + sudo dmesg -Tw 00:01:49.831 + [[ -x /usr/src/fio-static/fio ]] 00:01:49.831 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.831 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:49.831 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.831 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:49.831 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:49.831 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:49.831 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.831 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:49.831 Test configuration: 00:01:49.831 SPDK_TEST_UNITTEST=1 00:01:49.831 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.831 SPDK_TEST_NVME=1 00:01:49.831 SPDK_TEST_BLOCKDEV=1 00:01:49.831 SPDK_RUN_ASAN=1 00:01:49.831 SPDK_RUN_UBSAN=1 00:01:49.831 SPDK_TEST_RAID5=1 00:01:49.831 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.831 RUN_NIGHTLY=0 08:28:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:49.831 08:28:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.831 08:28:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.831 08:28:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.831 08:28:23 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:49.831 08:28:23 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:49.831 08:28:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:49.831 08:28:23 -- paths/export.sh@5 -- $ export PATH 00:01:49.831 08:28:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:49.831 08:28:23 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:49.831 08:28:23 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:49.831 08:28:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720772903.XXXXXX 00:01:49.831 08:28:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720772903.QHaiy7 00:01:49.831 08:28:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:49.831 08:28:23 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:49.831 08:28:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:49.831 08:28:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:49.831 08:28:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.831 08:28:23 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:49.831 08:28:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:49.831 08:28:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.831 08:28:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:49.831 08:28:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:49.831 08:28:23 -- pm/common@17 -- $ local monitor 00:01:49.831 08:28:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.831 08:28:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.831 08:28:23 -- pm/common@25 -- $ sleep 1 00:01:49.831 08:28:23 -- pm/common@21 -- $ date +%s 00:01:49.831 08:28:23 -- pm/common@21 -- $ date +%s 00:01:49.831 08:28:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720772903 00:01:49.831 08:28:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720772903 00:01:49.831 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720772903_collect-vmstat.pm.log 00:01:49.831 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720772903_collect-cpu-load.pm.log 00:01:51.207 08:28:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:51.207 08:28:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:51.207 08:28:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:51.207 08:28:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:51.207 08:28:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:51.207 Fri Jul 12 08:28:24 UTC 2024 00:01:51.207 08:28:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:51.207 v24.09-pre-203-gb3936a144 00:01:51.207 08:28:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:51.207 08:28:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:51.207 08:28:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:51.207 08:28:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.207 08:28:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.207 ************************************ 00:01:51.207 START TEST asan 00:01:51.207 ************************************ 00:01:51.207 using asan 00:01:51.207 08:28:24 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:51.207 00:01:51.207 real 0m0.000s 00:01:51.207 user 0m0.000s 00:01:51.207 sys 0m0.000s 00:01:51.207 08:28:24 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:51.207 08:28:24 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:51.207 ************************************ 00:01:51.207 END TEST asan 00:01:51.207 ************************************ 00:01:51.207 08:28:25 -- common/autotest_common.sh@1142 -- $ return 0 00:01:51.207 08:28:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:51.207 08:28:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:51.207 08:28:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:51.207 08:28:25 
-- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.207 08:28:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.207 ************************************ 00:01:51.207 START TEST ubsan 00:01:51.207 ************************************ 00:01:51.207 using ubsan 00:01:51.207 08:28:25 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:51.207 00:01:51.207 real 0m0.000s 00:01:51.207 user 0m0.000s 00:01:51.207 sys 0m0.000s 00:01:51.207 08:28:25 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:51.207 ************************************ 00:01:51.207 END TEST ubsan 00:01:51.207 ************************************ 00:01:51.207 08:28:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:51.207 08:28:25 -- common/autotest_common.sh@1142 -- $ return 0 00:01:51.207 08:28:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:51.207 08:28:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:51.207 08:28:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:51.207 08:28:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:51.207 08:28:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:51.207 08:28:25 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:51.207 08:28:25 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:51.207 08:28:25 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:01:51.207 08:28:25 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:51.207 08:28:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:51.207 08:28:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.207 ************************************ 00:01:51.207 START TEST unittest_build 00:01:51.207 ************************************ 00:01:51.207 08:28:25 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:01:51.207 08:28:25 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:51.207 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:51.207 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:51.466 Using 'verbs' RDMA provider 00:02:06.901 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.104 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.104 Creating mk/config.mk...done. 00:02:19.104 Creating mk/cc.flags.mk...done. 00:02:19.104 Type 'make' to build. 00:02:19.104 08:28:53 unittest_build -- common/autobuild_common.sh@412 -- $ make -j10 00:02:19.104 make[1]: Nothing to be done for 'all'. 
00:02:21.633 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:22.665 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:30.701 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:30.701 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:30.959 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:30.959 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:30.959 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.217 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.476 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.476 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.734 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.735 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:31.993 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.251 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.251 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.509 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.509 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.765 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.765 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.765 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 
'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.539 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.539 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.539 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.539 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.824 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.824 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.824 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.824 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.824 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.081 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.081 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.081 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.340 The Meson build system 00:02:34.340 Version: 1.4.0 00:02:34.340 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:34.340 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:34.340 Build type: native build 00:02:34.340 Program cat found: YES (/usr/bin/cat) 00:02:34.340 Project name: DPDK 00:02:34.340 Project version: 24.03.0 00:02:34.340 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:34.340 C linker for the host machine: cc ld.bfd 2.34 00:02:34.340 Host machine cpu family: x86_64 00:02:34.340 Host machine cpu: x86_64 00:02:34.340 Message: ## Building in Developer Mode ## 00:02:34.340 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.340 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.340 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.340 Program python3 found: YES (/usr/bin/python3) 00:02:34.340 Program cat found: YES (/usr/bin/cat) 00:02:34.340 Compiler for C supports arguments -march=native: YES 00:02:34.340 Checking for size of "void *" : 8 00:02:34.340 Checking for size of "void *" : 8 (cached) 00:02:34.340 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:34.340 Library m found: YES 00:02:34.340 Library numa found: YES 00:02:34.340 Has header "numaif.h" : YES 00:02:34.340 Library fdt found: NO 00:02:34.340 Library execinfo found: NO 00:02:34.340 
Has header "execinfo.h" : YES 00:02:34.340 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:34.340 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.340 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.340 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.340 Run-time dependency openssl found: YES 1.1.1f 00:02:34.340 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:34.340 Library pcap found: NO 00:02:34.340 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.340 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.340 Compiler for C supports arguments -Wformat: YES 00:02:34.340 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:34.340 Compiler for C supports arguments -Wformat-security: YES 00:02:34.340 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.340 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.340 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.340 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.340 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.340 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.340 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.340 Compiler for C supports arguments -Wundef: YES 00:02:34.340 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.340 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.340 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.340 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.340 Program objdump found: YES (/usr/bin/objdump) 00:02:34.340 Compiler for C supports arguments -mavx512f: YES 00:02:34.340 Checking if "AVX512 checking" compiles: YES 00:02:34.340 Fetching value of define "__SSE4_2__" : 1 00:02:34.340 Fetching value of define "__AES__" : 1 00:02:34.340 Fetching value of define "__AVX__" : 1 00:02:34.340 Fetching value of define "__AVX2__" : 1 00:02:34.340 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.340 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.340 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.340 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.340 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.340 Fetching value of define "__PCLMUL__" : 1 00:02:34.340 Fetching value of define "__RDRND__" : 1 00:02:34.340 Fetching value of define "__RDSEED__" : 1 00:02:34.340 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.340 Fetching value of define "__znver1__" : (undefined) 00:02:34.340 Fetching value of define "__znver2__" : (undefined) 00:02:34.340 Fetching value of define "__znver3__" : (undefined) 00:02:34.340 Fetching value of define "__znver4__" : (undefined) 00:02:34.340 Library asan found: YES 00:02:34.340 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.340 Message: lib/log: Defining dependency "log" 00:02:34.340 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.340 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.340 Library rt found: YES 00:02:34.340 Checking for function "getentropy" : NO 00:02:34.340 Message: lib/eal: Defining dependency "eal" 00:02:34.340 Message: lib/ring: Defining dependency "ring" 00:02:34.340 Message: lib/rcu: Defining dependency "rcu" 00:02:34.340 Message: lib/mempool: Defining dependency "mempool" 00:02:34.340 Message: lib/mbuf: Defining 
dependency "mbuf" 00:02:34.340 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.340 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.340 Compiler for C supports arguments -mpclmul: YES 00:02:34.340 Compiler for C supports arguments -maes: YES 00:02:34.340 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.340 Compiler for C supports arguments -mavx512bw: YES 00:02:34.340 Compiler for C supports arguments -mavx512dq: YES 00:02:34.340 Compiler for C supports arguments -mavx512vl: YES 00:02:34.340 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.340 Compiler for C supports arguments -mavx2: YES 00:02:34.340 Compiler for C supports arguments -mavx: YES 00:02:34.340 Message: lib/net: Defining dependency "net" 00:02:34.340 Message: lib/meter: Defining dependency "meter" 00:02:34.340 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.340 Message: lib/pci: Defining dependency "pci" 00:02:34.340 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.340 Message: lib/hash: Defining dependency "hash" 00:02:34.340 Message: lib/timer: Defining dependency "timer" 00:02:34.340 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.340 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.340 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.340 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.340 Message: lib/power: Defining dependency "power" 00:02:34.340 Message: lib/reorder: Defining dependency "reorder" 00:02:34.340 Message: lib/security: Defining dependency "security" 00:02:34.340 Has header "linux/userfaultfd.h" : YES 00:02:34.340 Has header "linux/vduse.h" : NO 00:02:34.340 Message: lib/vhost: Defining dependency "vhost" 00:02:34.340 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.340 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.340 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.340 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.340 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.340 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.340 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.340 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.340 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.340 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.340 Program doxygen found: YES (/usr/bin/doxygen) 00:02:34.340 Configuring doxy-api-html.conf using configuration 00:02:34.340 Configuring doxy-api-man.conf using configuration 00:02:34.340 Program mandb found: YES (/usr/bin/mandb) 00:02:34.340 Program sphinx-build found: NO 00:02:34.340 Configuring rte_build_config.h using configuration 00:02:34.340 Message: 00:02:34.340 ================= 00:02:34.340 Applications Enabled 00:02:34.340 ================= 00:02:34.340 00:02:34.340 apps: 00:02:34.340 00:02:34.340 00:02:34.340 Message: 00:02:34.340 ================= 00:02:34.340 Libraries Enabled 00:02:34.340 ================= 00:02:34.340 00:02:34.340 libs: 00:02:34.340 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.340 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.340 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.340 00:02:34.340 Message: 00:02:34.340 =============== 00:02:34.340 Drivers Enabled 
00:02:34.340 =============== 00:02:34.340 00:02:34.340 common: 00:02:34.340 00:02:34.340 bus: 00:02:34.340 pci, vdev, 00:02:34.340 mempool: 00:02:34.340 ring, 00:02:34.340 dma: 00:02:34.340 00:02:34.340 net: 00:02:34.340 00:02:34.340 crypto: 00:02:34.340 00:02:34.340 compress: 00:02:34.340 00:02:34.340 vdpa: 00:02:34.340 00:02:34.340 00:02:34.340 Message: 00:02:34.340 ================= 00:02:34.340 Content Skipped 00:02:34.340 ================= 00:02:34.340 00:02:34.340 apps: 00:02:34.340 dumpcap: explicitly disabled via build config 00:02:34.340 graph: explicitly disabled via build config 00:02:34.340 pdump: explicitly disabled via build config 00:02:34.340 proc-info: explicitly disabled via build config 00:02:34.340 test-acl: explicitly disabled via build config 00:02:34.340 test-bbdev: explicitly disabled via build config 00:02:34.340 test-cmdline: explicitly disabled via build config 00:02:34.340 test-compress-perf: explicitly disabled via build config 00:02:34.340 test-crypto-perf: explicitly disabled via build config 00:02:34.340 test-dma-perf: explicitly disabled via build config 00:02:34.340 test-eventdev: explicitly disabled via build config 00:02:34.340 test-fib: explicitly disabled via build config 00:02:34.340 test-flow-perf: explicitly disabled via build config 00:02:34.340 test-gpudev: explicitly disabled via build config 00:02:34.340 test-mldev: explicitly disabled via build config 00:02:34.340 test-pipeline: explicitly disabled via build config 00:02:34.340 test-pmd: explicitly disabled via build config 00:02:34.340 test-regex: explicitly disabled via build config 00:02:34.340 test-sad: explicitly disabled via build config 00:02:34.340 test-security-perf: explicitly disabled via build config 00:02:34.340 00:02:34.340 libs: 00:02:34.340 argparse: explicitly disabled via build config 00:02:34.340 metrics: explicitly disabled via build config 00:02:34.340 acl: explicitly disabled via build config 00:02:34.340 bbdev: explicitly disabled via build config 00:02:34.340 bitratestats: explicitly disabled via build config 00:02:34.340 bpf: explicitly disabled via build config 00:02:34.340 cfgfile: explicitly disabled via build config 00:02:34.340 distributor: explicitly disabled via build config 00:02:34.340 efd: explicitly disabled via build config 00:02:34.340 eventdev: explicitly disabled via build config 00:02:34.340 dispatcher: explicitly disabled via build config 00:02:34.340 gpudev: explicitly disabled via build config 00:02:34.340 gro: explicitly disabled via build config 00:02:34.340 gso: explicitly disabled via build config 00:02:34.340 ip_frag: explicitly disabled via build config 00:02:34.340 jobstats: explicitly disabled via build config 00:02:34.340 latencystats: explicitly disabled via build config 00:02:34.340 lpm: explicitly disabled via build config 00:02:34.340 member: explicitly disabled via build config 00:02:34.340 pcapng: explicitly disabled via build config 00:02:34.340 rawdev: explicitly disabled via build config 00:02:34.340 regexdev: explicitly disabled via build config 00:02:34.340 mldev: explicitly disabled via build config 00:02:34.340 rib: explicitly disabled via build config 00:02:34.340 sched: explicitly disabled via build config 00:02:34.340 stack: explicitly disabled via build config 00:02:34.341 ipsec: explicitly disabled via build config 00:02:34.341 pdcp: explicitly disabled via build config 00:02:34.341 fib: explicitly disabled via build config 00:02:34.341 port: explicitly disabled via build config 00:02:34.341 pdump: explicitly disabled via 
build config 00:02:34.341 table: explicitly disabled via build config 00:02:34.341 pipeline: explicitly disabled via build config 00:02:34.341 graph: explicitly disabled via build config 00:02:34.341 node: explicitly disabled via build config 00:02:34.341 00:02:34.341 drivers: 00:02:34.341 common/cpt: not in enabled drivers build config 00:02:34.341 common/dpaax: not in enabled drivers build config 00:02:34.341 common/iavf: not in enabled drivers build config 00:02:34.341 common/idpf: not in enabled drivers build config 00:02:34.341 common/ionic: not in enabled drivers build config 00:02:34.341 common/mvep: not in enabled drivers build config 00:02:34.341 common/octeontx: not in enabled drivers build config 00:02:34.341 bus/auxiliary: not in enabled drivers build config 00:02:34.341 bus/cdx: not in enabled drivers build config 00:02:34.341 bus/dpaa: not in enabled drivers build config 00:02:34.341 bus/fslmc: not in enabled drivers build config 00:02:34.341 bus/ifpga: not in enabled drivers build config 00:02:34.341 bus/platform: not in enabled drivers build config 00:02:34.341 bus/uacce: not in enabled drivers build config 00:02:34.341 bus/vmbus: not in enabled drivers build config 00:02:34.341 common/cnxk: not in enabled drivers build config 00:02:34.341 common/mlx5: not in enabled drivers build config 00:02:34.341 common/nfp: not in enabled drivers build config 00:02:34.341 common/nitrox: not in enabled drivers build config 00:02:34.341 common/qat: not in enabled drivers build config 00:02:34.341 common/sfc_efx: not in enabled drivers build config 00:02:34.341 mempool/bucket: not in enabled drivers build config 00:02:34.341 mempool/cnxk: not in enabled drivers build config 00:02:34.341 mempool/dpaa: not in enabled drivers build config 00:02:34.341 mempool/dpaa2: not in enabled drivers build config 00:02:34.341 mempool/octeontx: not in enabled drivers build config 00:02:34.341 mempool/stack: not in enabled drivers build config 00:02:34.341 dma/cnxk: not in enabled drivers build config 00:02:34.341 dma/dpaa: not in enabled drivers build config 00:02:34.341 dma/dpaa2: not in enabled drivers build config 00:02:34.341 dma/hisilicon: not in enabled drivers build config 00:02:34.341 dma/idxd: not in enabled drivers build config 00:02:34.341 dma/ioat: not in enabled drivers build config 00:02:34.341 dma/skeleton: not in enabled drivers build config 00:02:34.341 net/af_packet: not in enabled drivers build config 00:02:34.341 net/af_xdp: not in enabled drivers build config 00:02:34.341 net/ark: not in enabled drivers build config 00:02:34.341 net/atlantic: not in enabled drivers build config 00:02:34.341 net/avp: not in enabled drivers build config 00:02:34.341 net/axgbe: not in enabled drivers build config 00:02:34.341 net/bnx2x: not in enabled drivers build config 00:02:34.341 net/bnxt: not in enabled drivers build config 00:02:34.341 net/bonding: not in enabled drivers build config 00:02:34.341 net/cnxk: not in enabled drivers build config 00:02:34.341 net/cpfl: not in enabled drivers build config 00:02:34.341 net/cxgbe: not in enabled drivers build config 00:02:34.341 net/dpaa: not in enabled drivers build config 00:02:34.341 net/dpaa2: not in enabled drivers build config 00:02:34.341 net/e1000: not in enabled drivers build config 00:02:34.341 net/ena: not in enabled drivers build config 00:02:34.341 net/enetc: not in enabled drivers build config 00:02:34.341 net/enetfec: not in enabled drivers build config 00:02:34.341 net/enic: not in enabled drivers build config 00:02:34.341 net/failsafe: 
not in enabled drivers build config 00:02:34.341 net/fm10k: not in enabled drivers build config 00:02:34.341 net/gve: not in enabled drivers build config 00:02:34.341 net/hinic: not in enabled drivers build config 00:02:34.341 net/hns3: not in enabled drivers build config 00:02:34.341 net/i40e: not in enabled drivers build config 00:02:34.341 net/iavf: not in enabled drivers build config 00:02:34.341 net/ice: not in enabled drivers build config 00:02:34.341 net/idpf: not in enabled drivers build config 00:02:34.341 net/igc: not in enabled drivers build config 00:02:34.341 net/ionic: not in enabled drivers build config 00:02:34.341 net/ipn3ke: not in enabled drivers build config 00:02:34.341 net/ixgbe: not in enabled drivers build config 00:02:34.341 net/mana: not in enabled drivers build config 00:02:34.341 net/memif: not in enabled drivers build config 00:02:34.341 net/mlx4: not in enabled drivers build config 00:02:34.341 net/mlx5: not in enabled drivers build config 00:02:34.341 net/mvneta: not in enabled drivers build config 00:02:34.341 net/mvpp2: not in enabled drivers build config 00:02:34.341 net/netvsc: not in enabled drivers build config 00:02:34.341 net/nfb: not in enabled drivers build config 00:02:34.341 net/nfp: not in enabled drivers build config 00:02:34.341 net/ngbe: not in enabled drivers build config 00:02:34.341 net/null: not in enabled drivers build config 00:02:34.341 net/octeontx: not in enabled drivers build config 00:02:34.341 net/octeon_ep: not in enabled drivers build config 00:02:34.341 net/pcap: not in enabled drivers build config 00:02:34.341 net/pfe: not in enabled drivers build config 00:02:34.341 net/qede: not in enabled drivers build config 00:02:34.341 net/ring: not in enabled drivers build config 00:02:34.341 net/sfc: not in enabled drivers build config 00:02:34.341 net/softnic: not in enabled drivers build config 00:02:34.341 net/tap: not in enabled drivers build config 00:02:34.341 net/thunderx: not in enabled drivers build config 00:02:34.341 net/txgbe: not in enabled drivers build config 00:02:34.341 net/vdev_netvsc: not in enabled drivers build config 00:02:34.341 net/vhost: not in enabled drivers build config 00:02:34.341 net/virtio: not in enabled drivers build config 00:02:34.341 net/vmxnet3: not in enabled drivers build config 00:02:34.341 raw/*: missing internal dependency, "rawdev" 00:02:34.341 crypto/armv8: not in enabled drivers build config 00:02:34.341 crypto/bcmfs: not in enabled drivers build config 00:02:34.341 crypto/caam_jr: not in enabled drivers build config 00:02:34.341 crypto/ccp: not in enabled drivers build config 00:02:34.341 crypto/cnxk: not in enabled drivers build config 00:02:34.341 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.341 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.341 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.341 crypto/mlx5: not in enabled drivers build config 00:02:34.341 crypto/mvsam: not in enabled drivers build config 00:02:34.341 crypto/nitrox: not in enabled drivers build config 00:02:34.341 crypto/null: not in enabled drivers build config 00:02:34.341 crypto/octeontx: not in enabled drivers build config 00:02:34.341 crypto/openssl: not in enabled drivers build config 00:02:34.341 crypto/scheduler: not in enabled drivers build config 00:02:34.341 crypto/uadk: not in enabled drivers build config 00:02:34.341 crypto/virtio: not in enabled drivers build config 00:02:34.341 compress/isal: not in enabled drivers build config 00:02:34.341 compress/mlx5: not 
in enabled drivers build config 00:02:34.341 compress/nitrox: not in enabled drivers build config 00:02:34.341 compress/octeontx: not in enabled drivers build config 00:02:34.341 compress/zlib: not in enabled drivers build config 00:02:34.341 regex/*: missing internal dependency, "regexdev" 00:02:34.341 ml/*: missing internal dependency, "mldev" 00:02:34.341 vdpa/ifc: not in enabled drivers build config 00:02:34.341 vdpa/mlx5: not in enabled drivers build config 00:02:34.341 vdpa/nfp: not in enabled drivers build config 00:02:34.341 vdpa/sfc: not in enabled drivers build config 00:02:34.341 event/*: missing internal dependency, "eventdev" 00:02:34.341 baseband/*: missing internal dependency, "bbdev" 00:02:34.341 gpu/*: missing internal dependency, "gpudev" 00:02:34.341 00:02:34.341 00:02:34.341 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.341 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.341 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.599 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.599 Build targets in project: 85 00:02:34.599 00:02:34.599 DPDK 24.03.0 00:02:34.599 00:02:34.599 User defined options 00:02:34.599 buildtype : debug 00:02:34.599 default_library : static 00:02:34.599 libdir : lib 00:02:34.599 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.599 b_sanitize : address 00:02:34.599 c_args : -fPIC -Werror 00:02:34.599 c_link_args : 00:02:34.599 cpu_instruction_set: native 00:02:34.599 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:34.599 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,argparse,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:34.599 enable_docs : false 00:02:34.599 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.599 enable_kmods : false 00:02:34.599 max_lcores : 128 00:02:34.599 tests : false 00:02:34.599 00:02:34.599 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.599 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.599 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 
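For reference, the "User defined options" summary above corresponds roughly to a meson invocation along the following lines. This is a sketch reconstructed from the options the log prints, not the exact command the build script ran; the prefix path and option values are taken from the summary above, and the disable_apps/disable_libs lists are abbreviated.
  # configure the DPDK sub-build (run from the dpdk source dir)
  meson setup build-tmp \
      -Dbuildtype=debug -Ddefault_library=static -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
      # plus -Ddisable_apps=... and -Ddisable_libs=... with the lists shown above
  # the compile step itself, as the log shows below (ninja: Entering directory ...)
  ninja -C build-tmp -j 10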
00:02:34.856 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.113 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.113 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:35.113 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.113 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:35.371 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.371 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.371 [4/267] Linking static target lib/librte_kvargs.a 00:02:35.371 [5/267] Linking static target lib/librte_log.a 00:02:35.371 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.371 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.371 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.371 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.630 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.630 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.630 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.630 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:35.630 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.630 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.630 [14/267] Linking static target lib/librte_telemetry.a 00:02:35.630 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.630 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.630 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.630 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.889 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.889 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.889 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.889 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.889 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.889 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.889 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.889 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.889 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:35.889 [23/267] Generating lib/kvargs.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:35.889 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.889 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.147 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.147 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:36.147 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.406 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.406 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.406 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.406 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.406 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.406 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.406 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.406 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.406 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.406 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.406 [38/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.664 [39/267] Linking target lib/librte_log.so.24.1 00:02:36.664 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.664 [41/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.664 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.664 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.664 [44/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:36.664 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.664 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.664 [47/267] Linking target lib/librte_kvargs.so.24.1 00:02:36.664 [48/267] Linking target lib/librte_telemetry.so.24.1 00:02:36.664 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.664 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.924 [51/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:36.924 [52/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.924 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.924 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.924 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.924 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.924 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.924 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.924 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.184 [60/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.184 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.184 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.184 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.184 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.184 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.184 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.184 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.184 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.443 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.443 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.443 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.443 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.443 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.443 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.443 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.443 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.443 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.702 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.702 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.702 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.702 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.702 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.702 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.702 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.702 [85/267] Linking static target lib/librte_ring.a 00:02:37.702 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.961 [87/267] Linking static target lib/librte_eal.a 00:02:37.961 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.961 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.961 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.961 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.961 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.961 [93/267] Linking static target lib/librte_mempool.a 00:02:37.961 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.220 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.220 [96/267] Linking static target lib/librte_rcu.a 00:02:38.220 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.220 [98/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.220 [99/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.220 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.220 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.478 [102/267] Generating 
lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.478 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.478 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.478 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.478 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.478 [107/267] Linking static target lib/librte_net.a 00:02:38.737 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.737 [109/267] Linking static target lib/librte_meter.a 00:02:38.737 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.737 [111/267] Linking static target lib/librte_mbuf.a 00:02:38.737 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.737 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.737 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.737 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.995 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.995 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.995 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.253 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.253 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.253 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.253 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.511 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.511 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.511 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:39.511 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.511 [127/267] Linking static target lib/librte_pci.a 00:02:39.769 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.770 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.770 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:39.770 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.770 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.770 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:39.770 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:39.770 [135/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.770 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.770 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.770 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.770 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.770 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.770 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.028 [142/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.028 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:40.028 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.028 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.028 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.028 [147/267] Linking static target lib/librte_cmdline.a 00:02:40.286 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.286 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.286 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.286 [151/267] Linking static target lib/librte_timer.a 00:02:40.286 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.286 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.286 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.544 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.802 [156/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.802 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.802 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.802 [159/267] Linking static target lib/librte_compressdev.a 00:02:40.802 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.802 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.060 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.060 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.060 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.060 [165/267] Linking static target lib/librte_hash.a 00:02:41.060 [166/267] Linking static target lib/librte_dmadev.a 00:02:41.060 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.060 [168/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.060 [169/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.060 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.060 [171/267] Linking static target lib/librte_ethdev.a 00:02:41.060 [172/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.060 [173/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.318 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.318 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:41.576 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:41.576 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:41.576 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.576 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:41.576 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:41.576 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:41.834 [182/267] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.834 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:41.834 [184/267] Linking static target lib/librte_power.a 00:02:41.834 [185/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.834 [186/267] Linking static target lib/librte_cryptodev.a 00:02:42.093 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.093 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.093 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:42.093 [190/267] Linking static target lib/librte_reorder.a 00:02:42.093 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.093 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:42.093 [193/267] Linking static target lib/librte_security.a 00:02:42.351 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.610 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.610 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:42.610 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.610 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.869 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:42.869 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.869 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.869 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:42.869 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:42.869 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.182 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:43.182 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:43.182 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:43.182 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.440 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:43.440 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:43.440 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:43.440 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.440 [213/267] Linking static target drivers/librte_bus_vdev.a 00:02:43.440 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.440 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.440 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:43.440 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.440 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.440 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:43.440 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:43.698 
[221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:43.698 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.699 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.699 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.699 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:43.957 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.331 [227/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.331 [228/267] Linking target lib/librte_eal.so.24.1 00:02:45.331 [229/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:45.331 [230/267] Linking target lib/librte_pci.so.24.1 00:02:45.331 [231/267] Linking target lib/librte_timer.so.24.1 00:02:45.331 [232/267] Linking target lib/librte_ring.so.24.1 00:02:45.331 [233/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:45.331 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:45.331 [235/267] Linking target lib/librte_meter.so.24.1 00:02:45.331 [236/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:45.331 [237/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.331 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:45.331 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:45.331 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:45.331 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:45.331 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:45.331 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:45.593 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:45.593 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:45.593 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:45.593 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:45.593 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:45.851 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:45.851 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:02:45.851 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:45.851 [252/267] Linking target lib/librte_compressdev.so.24.1 00:02:45.851 [253/267] Linking target lib/librte_net.so.24.1 00:02:45.851 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:45.851 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:46.109 [256/267] Linking target lib/librte_hash.so.24.1 00:02:46.109 [257/267] Linking target lib/librte_security.so.24.1 00:02:46.109 [258/267] Linking target lib/librte_cmdline.so.24.1 00:02:46.109 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:47.043 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.043 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:47.301 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 
00:02:47.301 [263/267] Linking target lib/librte_power.so.24.1 00:02:49.828 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.828 [265/267] Linking static target lib/librte_vhost.a 00:02:51.199 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.199 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:51.199 INFO: autodetecting backend as ninja 00:02:51.199 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.131 CC lib/log/log.o 00:02:52.131 CC lib/ut_mock/mock.o 00:02:52.131 CC lib/log/log_deprecated.o 00:02:52.131 CC lib/log/log_flags.o 00:02:52.131 CC lib/ut/ut.o 00:02:52.388 LIB libspdk_ut.a 00:02:52.388 LIB libspdk_ut_mock.a 00:02:52.388 LIB libspdk_log.a 00:02:52.645 CC lib/dma/dma.o 00:02:52.645 CC lib/util/bit_array.o 00:02:52.645 CC lib/util/base64.o 00:02:52.645 CC lib/util/crc16.o 00:02:52.645 CC lib/util/cpuset.o 00:02:52.645 CXX lib/trace_parser/trace.o 00:02:52.645 CC lib/ioat/ioat.o 00:02:52.645 CC lib/util/crc32c.o 00:02:52.645 CC lib/util/crc32.o 00:02:52.645 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.645 CC lib/vfio_user/host/vfio_user.o 00:02:52.903 CC lib/util/crc32_ieee.o 00:02:52.903 CC lib/util/crc64.o 00:02:52.903 CC lib/util/dif.o 00:02:52.903 CC lib/util/fd.o 00:02:52.903 LIB libspdk_dma.a 00:02:52.903 CC lib/util/file.o 00:02:52.903 CC lib/util/hexlify.o 00:02:52.903 CC lib/util/iov.o 00:02:52.903 CC lib/util/math.o 00:02:52.903 CC lib/util/pipe.o 00:02:52.903 CC lib/util/strerror_tls.o 00:02:52.903 CC lib/util/string.o 00:02:52.903 LIB libspdk_ioat.a 00:02:52.903 LIB libspdk_vfio_user.a 00:02:53.161 CC lib/util/uuid.o 00:02:53.161 CC lib/util/fd_group.o 00:02:53.161 CC lib/util/xor.o 00:02:53.161 CC lib/util/zipf.o 00:02:53.419 LIB libspdk_util.a 00:02:53.676 CC lib/rdma_provider/common.o 00:02:53.676 CC lib/vmd/vmd.o 00:02:53.676 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.676 CC lib/vmd/led.o 00:02:53.676 CC lib/rdma_utils/rdma_utils.o 00:02:53.676 CC lib/env_dpdk/env.o 00:02:53.676 CC lib/json/json_parse.o 00:02:53.676 CC lib/idxd/idxd.o 00:02:53.676 CC lib/conf/conf.o 00:02:53.933 CC lib/idxd/idxd_user.o 00:02:53.933 CC lib/json/json_util.o 00:02:53.933 LIB libspdk_rdma_provider.a 00:02:53.933 LIB libspdk_trace_parser.a 00:02:53.933 CC lib/json/json_write.o 00:02:53.933 CC lib/env_dpdk/memory.o 00:02:53.933 LIB libspdk_conf.a 00:02:53.933 CC lib/env_dpdk/pci.o 00:02:53.933 LIB libspdk_rdma_utils.a 00:02:53.933 CC lib/env_dpdk/init.o 00:02:53.933 CC lib/env_dpdk/threads.o 00:02:54.192 CC lib/env_dpdk/pci_ioat.o 00:02:54.192 CC lib/env_dpdk/pci_virtio.o 00:02:54.192 CC lib/env_dpdk/pci_vmd.o 00:02:54.192 LIB libspdk_json.a 00:02:54.192 CC lib/env_dpdk/pci_idxd.o 00:02:54.192 CC lib/env_dpdk/pci_event.o 00:02:54.192 CC lib/env_dpdk/sigbus_handler.o 00:02:54.192 CC lib/env_dpdk/pci_dpdk.o 00:02:54.449 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.449 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.449 LIB libspdk_idxd.a 00:02:54.449 LIB libspdk_vmd.a 00:02:54.449 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.449 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.449 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.449 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.706 LIB libspdk_jsonrpc.a 00:02:54.963 CC lib/rpc/rpc.o 00:02:55.221 LIB libspdk_env_dpdk.a 00:02:55.221 LIB libspdk_rpc.a 00:02:55.221 CC lib/notify/notify.o 00:02:55.221 CC lib/keyring/keyring_rpc.o 00:02:55.221 CC lib/keyring/keyring.o 00:02:55.221 CC 
lib/notify/notify_rpc.o 00:02:55.478 CC lib/trace/trace.o 00:02:55.478 CC lib/trace/trace_flags.o 00:02:55.478 CC lib/trace/trace_rpc.o 00:02:55.478 LIB libspdk_notify.a 00:02:55.736 LIB libspdk_keyring.a 00:02:55.736 LIB libspdk_trace.a 00:02:55.736 CC lib/thread/iobuf.o 00:02:55.736 CC lib/thread/thread.o 00:02:55.736 CC lib/sock/sock.o 00:02:55.736 CC lib/sock/sock_rpc.o 00:02:56.302 LIB libspdk_sock.a 00:02:56.302 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.302 CC lib/nvme/nvme_ctrlr.o 00:02:56.302 CC lib/nvme/nvme_fabric.o 00:02:56.302 CC lib/nvme/nvme_ns_cmd.o 00:02:56.302 CC lib/nvme/nvme_ns.o 00:02:56.302 CC lib/nvme/nvme_pcie.o 00:02:56.302 CC lib/nvme/nvme_qpair.o 00:02:56.302 CC lib/nvme/nvme_pcie_common.o 00:02:56.302 CC lib/nvme/nvme.o 00:02:56.868 CC lib/nvme/nvme_quirks.o 00:02:57.126 CC lib/nvme/nvme_transport.o 00:02:57.126 CC lib/nvme/nvme_discovery.o 00:02:57.126 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.385 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.385 CC lib/nvme/nvme_tcp.o 00:02:57.385 CC lib/nvme/nvme_opal.o 00:02:57.385 CC lib/nvme/nvme_io_msg.o 00:02:57.385 CC lib/nvme/nvme_poll_group.o 00:02:57.642 CC lib/nvme/nvme_zns.o 00:02:57.642 CC lib/nvme/nvme_stubs.o 00:02:57.642 LIB libspdk_thread.a 00:02:57.642 CC lib/nvme/nvme_auth.o 00:02:57.642 CC lib/nvme/nvme_cuse.o 00:02:57.901 CC lib/nvme/nvme_rdma.o 00:02:57.901 CC lib/accel/accel.o 00:02:57.901 CC lib/blob/blobstore.o 00:02:57.901 CC lib/accel/accel_rpc.o 00:02:57.901 CC lib/init/json_config.o 00:02:58.158 CC lib/accel/accel_sw.o 00:02:58.158 CC lib/virtio/virtio.o 00:02:58.158 CC lib/virtio/virtio_vhost_user.o 00:02:58.158 CC lib/init/subsystem.o 00:02:58.415 CC lib/init/subsystem_rpc.o 00:02:58.415 CC lib/virtio/virtio_vfio_user.o 00:02:58.415 CC lib/virtio/virtio_pci.o 00:02:58.674 CC lib/init/rpc.o 00:02:58.674 CC lib/blob/request.o 00:02:58.674 CC lib/blob/zeroes.o 00:02:58.674 CC lib/blob/blob_bs_dev.o 00:02:58.674 LIB libspdk_init.a 00:02:58.932 LIB libspdk_virtio.a 00:02:58.932 CC lib/event/app.o 00:02:58.932 CC lib/event/reactor.o 00:02:58.932 CC lib/event/log_rpc.o 00:02:58.932 CC lib/event/app_rpc.o 00:02:58.932 CC lib/event/scheduler_static.o 00:02:59.190 LIB libspdk_accel.a 00:02:59.190 LIB libspdk_nvme.a 00:02:59.190 CC lib/bdev/bdev.o 00:02:59.190 CC lib/bdev/bdev_rpc.o 00:02:59.190 CC lib/bdev/part.o 00:02:59.190 CC lib/bdev/bdev_zone.o 00:02:59.190 CC lib/bdev/scsi_nvme.o 00:02:59.448 LIB libspdk_event.a 00:03:01.997 LIB libspdk_blob.a 00:03:01.997 CC lib/blobfs/tree.o 00:03:01.997 CC lib/blobfs/blobfs.o 00:03:01.997 CC lib/lvol/lvol.o 00:03:02.255 LIB libspdk_bdev.a 00:03:02.513 CC lib/nvmf/ctrlr_discovery.o 00:03:02.513 CC lib/nvmf/ctrlr_bdev.o 00:03:02.513 CC lib/nvmf/subsystem.o 00:03:02.513 CC lib/nvmf/ctrlr.o 00:03:02.513 CC lib/nbd/nbd.o 00:03:02.513 CC lib/nvmf/nvmf.o 00:03:02.513 CC lib/ftl/ftl_core.o 00:03:02.513 CC lib/scsi/dev.o 00:03:02.771 CC lib/scsi/lun.o 00:03:03.030 LIB libspdk_blobfs.a 00:03:03.030 CC lib/ftl/ftl_init.o 00:03:03.030 CC lib/nbd/nbd_rpc.o 00:03:03.030 CC lib/ftl/ftl_layout.o 00:03:03.030 LIB libspdk_lvol.a 00:03:03.030 CC lib/ftl/ftl_debug.o 00:03:03.030 CC lib/ftl/ftl_io.o 00:03:03.030 CC lib/scsi/port.o 00:03:03.289 CC lib/scsi/scsi.o 00:03:03.289 LIB libspdk_nbd.a 00:03:03.289 CC lib/scsi/scsi_bdev.o 00:03:03.289 CC lib/nvmf/nvmf_rpc.o 00:03:03.289 CC lib/scsi/scsi_pr.o 00:03:03.289 CC lib/nvmf/transport.o 00:03:03.289 CC lib/nvmf/tcp.o 00:03:03.289 CC lib/nvmf/stubs.o 00:03:03.289 CC lib/ftl/ftl_sb.o 00:03:03.548 CC lib/ftl/ftl_l2p.o 00:03:03.548 CC 
lib/scsi/scsi_rpc.o 00:03:03.806 CC lib/nvmf/mdns_server.o 00:03:03.806 CC lib/scsi/task.o 00:03:03.806 CC lib/ftl/ftl_l2p_flat.o 00:03:03.806 CC lib/nvmf/rdma.o 00:03:03.806 CC lib/ftl/ftl_nv_cache.o 00:03:03.806 CC lib/ftl/ftl_band.o 00:03:04.064 LIB libspdk_scsi.a 00:03:04.064 CC lib/ftl/ftl_band_ops.o 00:03:04.064 CC lib/ftl/ftl_writer.o 00:03:04.064 CC lib/ftl/ftl_rq.o 00:03:04.322 CC lib/ftl/ftl_reloc.o 00:03:04.322 CC lib/ftl/ftl_l2p_cache.o 00:03:04.322 CC lib/iscsi/conn.o 00:03:04.322 CC lib/ftl/ftl_p2l.o 00:03:04.322 CC lib/vhost/vhost.o 00:03:04.322 CC lib/vhost/vhost_rpc.o 00:03:04.580 CC lib/vhost/vhost_scsi.o 00:03:04.838 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.838 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.838 CC lib/vhost/vhost_blk.o 00:03:04.838 CC lib/vhost/rte_vhost_user.o 00:03:05.096 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.096 CC lib/iscsi/init_grp.o 00:03:05.096 CC lib/iscsi/iscsi.o 00:03:05.096 CC lib/iscsi/md5.o 00:03:05.096 CC lib/iscsi/param.o 00:03:05.096 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.354 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.354 CC lib/iscsi/portal_grp.o 00:03:05.354 CC lib/iscsi/tgt_node.o 00:03:05.354 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.354 CC lib/iscsi/iscsi_subsystem.o 00:03:05.613 CC lib/iscsi/iscsi_rpc.o 00:03:05.613 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.613 CC lib/iscsi/task.o 00:03:05.613 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.871 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.871 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.871 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.871 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.871 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.871 CC lib/ftl/utils/ftl_conf.o 00:03:05.871 CC lib/ftl/utils/ftl_md.o 00:03:06.131 LIB libspdk_vhost.a 00:03:06.131 CC lib/ftl/utils/ftl_mempool.o 00:03:06.131 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.131 CC lib/ftl/utils/ftl_property.o 00:03:06.131 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.131 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.131 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.392 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.392 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.392 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.392 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.392 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.392 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.392 LIB libspdk_nvmf.a 00:03:06.392 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.392 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.650 CC lib/ftl/base/ftl_base_dev.o 00:03:06.650 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.650 CC lib/ftl/ftl_trace.o 00:03:06.650 LIB libspdk_iscsi.a 00:03:06.908 LIB libspdk_ftl.a 00:03:07.167 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.167 CC module/accel/dsa/accel_dsa.o 00:03:07.167 CC module/accel/ioat/accel_ioat.o 00:03:07.167 CC module/sock/posix/posix.o 00:03:07.167 CC module/accel/error/accel_error.o 00:03:07.167 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.167 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.167 CC module/blob/bdev/blob_bdev.o 00:03:07.167 CC module/accel/iaa/accel_iaa.o 00:03:07.167 CC module/keyring/file/keyring.o 00:03:07.426 LIB libspdk_env_dpdk_rpc.a 00:03:07.426 CC module/accel/iaa/accel_iaa_rpc.o 00:03:07.426 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.426 CC module/keyring/file/keyring_rpc.o 00:03:07.426 CC module/accel/error/accel_error_rpc.o 00:03:07.426 CC module/accel/ioat/accel_ioat_rpc.o 00:03:07.426 LIB libspdk_scheduler_dynamic.a 00:03:07.426 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.426 LIB libspdk_accel_iaa.a 
00:03:07.426 LIB libspdk_keyring_file.a 00:03:07.684 LIB libspdk_accel_error.a 00:03:07.684 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.684 LIB libspdk_accel_ioat.a 00:03:07.684 LIB libspdk_blob_bdev.a 00:03:07.684 LIB libspdk_accel_dsa.a 00:03:07.684 CC module/keyring/linux/keyring_rpc.o 00:03:07.684 CC module/keyring/linux/keyring.o 00:03:07.684 LIB libspdk_scheduler_gscheduler.a 00:03:07.684 LIB libspdk_keyring_linux.a 00:03:07.685 CC module/bdev/gpt/gpt.o 00:03:07.685 CC module/bdev/error/vbdev_error.o 00:03:07.685 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.685 CC module/bdev/delay/vbdev_delay.o 00:03:07.685 CC module/bdev/malloc/bdev_malloc.o 00:03:07.685 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.944 CC module/bdev/null/bdev_null.o 00:03:07.944 CC module/bdev/nvme/bdev_nvme.o 00:03:07.944 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.944 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.944 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.944 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.201 CC module/bdev/null/bdev_null_rpc.o 00:03:08.201 LIB libspdk_blobfs_bdev.a 00:03:08.201 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.201 LIB libspdk_sock_posix.a 00:03:08.201 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.201 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.201 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.201 CC module/bdev/nvme/nvme_rpc.o 00:03:08.201 LIB libspdk_bdev_error.a 00:03:08.201 LIB libspdk_bdev_gpt.a 00:03:08.201 LIB libspdk_bdev_null.a 00:03:08.459 LIB libspdk_bdev_delay.a 00:03:08.459 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.459 LIB libspdk_bdev_passthru.a 00:03:08.459 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.459 CC module/bdev/nvme/vbdev_opal.o 00:03:08.459 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.459 LIB libspdk_bdev_malloc.a 00:03:08.459 CC module/bdev/raid/bdev_raid.o 00:03:08.459 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.460 CC module/bdev/split/vbdev_split.o 00:03:08.460 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.460 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.718 LIB libspdk_bdev_split.a 00:03:08.718 CC module/bdev/aio/bdev_aio.o 00:03:08.718 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.718 LIB libspdk_bdev_lvol.a 00:03:08.718 CC module/bdev/ftl/bdev_ftl.o 00:03:08.718 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.718 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.718 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.978 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.978 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.978 LIB libspdk_bdev_zone_block.a 00:03:08.978 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.978 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.978 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.237 LIB libspdk_bdev_aio.a 00:03:09.237 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.237 CC module/bdev/raid/raid0.o 00:03:09.237 CC module/bdev/raid/raid1.o 00:03:09.237 CC module/bdev/raid/concat.o 00:03:09.237 CC module/bdev/raid/raid5f.o 00:03:09.237 LIB libspdk_bdev_ftl.a 00:03:09.237 LIB libspdk_bdev_iscsi.a 00:03:09.494 LIB libspdk_bdev_virtio.a 00:03:09.752 LIB libspdk_bdev_raid.a 00:03:10.318 LIB libspdk_bdev_nvme.a 00:03:10.885 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.885 CC module/event/subsystems/sock/sock.o 00:03:10.885 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.885 CC module/event/subsystems/keyring/keyring.o 00:03:10.885 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.885 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.885 CC 
module/event/subsystems/vmd/vmd.o 00:03:10.885 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.885 LIB libspdk_event_keyring.a 00:03:10.885 LIB libspdk_event_vhost_blk.a 00:03:10.885 LIB libspdk_event_sock.a 00:03:10.885 LIB libspdk_event_scheduler.a 00:03:10.885 LIB libspdk_event_iobuf.a 00:03:10.885 LIB libspdk_event_vmd.a 00:03:11.144 CC module/event/subsystems/accel/accel.o 00:03:11.144 LIB libspdk_event_accel.a 00:03:11.403 CC module/event/subsystems/bdev/bdev.o 00:03:11.662 LIB libspdk_event_bdev.a 00:03:11.662 CC module/event/subsystems/scsi/scsi.o 00:03:11.662 CC module/event/subsystems/nbd/nbd.o 00:03:11.662 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.662 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.920 LIB libspdk_event_nbd.a 00:03:11.920 LIB libspdk_event_scsi.a 00:03:11.920 LIB libspdk_event_nvmf.a 00:03:12.179 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.179 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.179 LIB libspdk_event_vhost_scsi.a 00:03:12.179 LIB libspdk_event_iscsi.a 00:03:12.436 CC app/trace_record/trace_record.o 00:03:12.436 CC app/spdk_lspci/spdk_lspci.o 00:03:12.436 CXX app/trace/trace.o 00:03:12.436 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.436 CC app/nvmf_tgt/nvmf_main.o 00:03:12.436 CC app/spdk_tgt/spdk_tgt.o 00:03:12.436 CC test/thread/poller_perf/poller_perf.o 00:03:12.436 CC examples/util/zipf/zipf.o 00:03:12.694 CC test/dma/test_dma/test_dma.o 00:03:12.694 CC test/app/bdev_svc/bdev_svc.o 00:03:12.694 LINK spdk_lspci 00:03:12.694 LINK poller_perf 00:03:12.694 LINK nvmf_tgt 00:03:12.694 LINK spdk_trace_record 00:03:12.694 LINK iscsi_tgt 00:03:12.694 LINK zipf 00:03:12.951 LINK bdev_svc 00:03:12.952 LINK spdk_tgt 00:03:12.952 LINK spdk_trace 00:03:12.952 LINK test_dma 00:03:13.568 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.568 CC examples/ioat/perf/perf.o 00:03:13.568 CC examples/ioat/verify/verify.o 00:03:13.568 CC test/thread/lock/spdk_lock.o 00:03:13.568 LINK ioat_perf 00:03:13.568 LINK verify 00:03:13.827 CC app/spdk_nvme_perf/perf.o 00:03:13.827 LINK nvme_fuzz 00:03:14.392 CC app/spdk_nvme_identify/identify.o 00:03:14.392 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.392 LINK lsvmd 00:03:14.650 LINK spdk_nvme_perf 00:03:14.908 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.166 LINK spdk_nvme_identify 00:03:15.423 LINK spdk_lock 00:03:15.423 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.423 CC examples/vmd/led/led.o 00:03:15.682 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.682 LINK led 00:03:15.682 TEST_HEADER include/spdk/ioat.h 00:03:15.682 TEST_HEADER include/spdk/blobfs.h 00:03:15.682 TEST_HEADER include/spdk/notify.h 00:03:15.682 TEST_HEADER include/spdk/pipe.h 00:03:15.682 TEST_HEADER include/spdk/accel.h 00:03:15.682 TEST_HEADER include/spdk/file.h 00:03:15.682 TEST_HEADER include/spdk/version.h 00:03:15.682 TEST_HEADER include/spdk/trace_parser.h 00:03:15.682 TEST_HEADER include/spdk/opal_spec.h 00:03:15.682 TEST_HEADER include/spdk/uuid.h 00:03:15.682 TEST_HEADER include/spdk/likely.h 00:03:15.682 TEST_HEADER include/spdk/dif.h 00:03:15.682 TEST_HEADER include/spdk/keyring_module.h 00:03:15.682 TEST_HEADER include/spdk/memory.h 00:03:15.682 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:15.682 TEST_HEADER include/spdk/dma.h 00:03:15.682 TEST_HEADER include/spdk/nbd.h 00:03:15.682 TEST_HEADER include/spdk/conf.h 00:03:15.682 TEST_HEADER include/spdk/env_dpdk.h 00:03:15.682 TEST_HEADER include/spdk/nvmf_spec.h 00:03:15.682 TEST_HEADER include/spdk/iscsi_spec.h 00:03:15.682 TEST_HEADER include/spdk/mmio.h 
00:03:15.682 TEST_HEADER include/spdk/json.h 00:03:15.682 TEST_HEADER include/spdk/opal.h 00:03:15.682 TEST_HEADER include/spdk/bdev.h 00:03:15.682 TEST_HEADER include/spdk/keyring.h 00:03:15.682 TEST_HEADER include/spdk/base64.h 00:03:15.682 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:15.940 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:15.940 TEST_HEADER include/spdk/fd.h 00:03:15.940 TEST_HEADER include/spdk/barrier.h 00:03:15.940 TEST_HEADER include/spdk/scsi_spec.h 00:03:15.940 TEST_HEADER include/spdk/zipf.h 00:03:15.940 TEST_HEADER include/spdk/nvmf.h 00:03:15.940 TEST_HEADER include/spdk/queue.h 00:03:15.940 TEST_HEADER include/spdk/xor.h 00:03:15.940 TEST_HEADER include/spdk/cpuset.h 00:03:15.940 TEST_HEADER include/spdk/thread.h 00:03:15.940 TEST_HEADER include/spdk/bdev_zone.h 00:03:15.940 TEST_HEADER include/spdk/fd_group.h 00:03:15.940 TEST_HEADER include/spdk/tree.h 00:03:15.940 TEST_HEADER include/spdk/blob_bdev.h 00:03:15.940 TEST_HEADER include/spdk/crc64.h 00:03:15.940 TEST_HEADER include/spdk/assert.h 00:03:15.940 TEST_HEADER include/spdk/nvme_spec.h 00:03:15.940 TEST_HEADER include/spdk/endian.h 00:03:15.940 TEST_HEADER include/spdk/pci_ids.h 00:03:15.940 TEST_HEADER include/spdk/log.h 00:03:15.940 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:15.940 TEST_HEADER include/spdk/ftl.h 00:03:15.940 TEST_HEADER include/spdk/config.h 00:03:15.940 TEST_HEADER include/spdk/vhost.h 00:03:15.940 TEST_HEADER include/spdk/bdev_module.h 00:03:15.940 TEST_HEADER include/spdk/nvme_intel.h 00:03:15.940 TEST_HEADER include/spdk/idxd_spec.h 00:03:15.940 TEST_HEADER include/spdk/crc16.h 00:03:15.940 TEST_HEADER include/spdk/nvme.h 00:03:15.940 TEST_HEADER include/spdk/stdinc.h 00:03:15.940 TEST_HEADER include/spdk/scsi.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:15.940 TEST_HEADER include/spdk/idxd.h 00:03:15.940 TEST_HEADER include/spdk/hexlify.h 00:03:15.940 TEST_HEADER include/spdk/reduce.h 00:03:15.940 TEST_HEADER include/spdk/crc32.h 00:03:15.940 TEST_HEADER include/spdk/init.h 00:03:15.940 CC test/app/histogram_perf/histogram_perf.o 00:03:15.940 TEST_HEADER include/spdk/nvmf_transport.h 00:03:15.940 TEST_HEADER include/spdk/nvme_zns.h 00:03:15.940 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:15.940 TEST_HEADER include/spdk/util.h 00:03:15.940 TEST_HEADER include/spdk/jsonrpc.h 00:03:15.940 TEST_HEADER include/spdk/env.h 00:03:15.940 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:15.940 TEST_HEADER include/spdk/lvol.h 00:03:15.940 TEST_HEADER include/spdk/histogram_data.h 00:03:15.940 TEST_HEADER include/spdk/event.h 00:03:15.940 TEST_HEADER include/spdk/trace.h 00:03:15.940 TEST_HEADER include/spdk/ioat_spec.h 00:03:15.940 TEST_HEADER include/spdk/string.h 00:03:15.940 TEST_HEADER include/spdk/ublk.h 00:03:15.940 TEST_HEADER include/spdk/bit_array.h 00:03:15.940 CC test/app/jsoncat/jsoncat.o 00:03:15.940 TEST_HEADER include/spdk/scheduler.h 00:03:15.940 TEST_HEADER include/spdk/blob.h 00:03:15.940 TEST_HEADER include/spdk/gpt_spec.h 00:03:15.940 TEST_HEADER include/spdk/sock.h 00:03:15.940 TEST_HEADER include/spdk/vmd.h 00:03:15.940 TEST_HEADER include/spdk/rpc.h 00:03:15.940 TEST_HEADER include/spdk/accel_module.h 00:03:15.940 TEST_HEADER include/spdk/bit_pool.h 00:03:15.940 CXX test/cpp_headers/ioat.o 00:03:15.940 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.940 LINK vhost_fuzz 00:03:16.198 LINK histogram_perf 00:03:16.198 LINK jsoncat 00:03:16.198 CXX test/cpp_headers/blobfs.o 00:03:16.198 CC examples/idxd/perf/perf.o 00:03:16.198 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:03:16.198 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.198 CXX test/cpp_headers/notify.o 00:03:16.455 CXX test/cpp_headers/pipe.o 00:03:16.455 LINK interrupt_tgt 00:03:16.455 LINK spdk_nvme_discover 00:03:16.712 LINK idxd_perf 00:03:16.712 CXX test/cpp_headers/accel.o 00:03:16.712 LINK mem_callbacks 00:03:16.712 CXX test/cpp_headers/file.o 00:03:16.712 CXX test/cpp_headers/version.o 00:03:16.712 CC test/app/stub/stub.o 00:03:16.712 CC test/env/vtophys/vtophys.o 00:03:16.712 CXX test/cpp_headers/trace_parser.o 00:03:16.969 LINK iscsi_fuzz 00:03:16.969 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.969 LINK stub 00:03:16.969 CC test/env/memory/memory_ut.o 00:03:16.969 LINK vtophys 00:03:16.969 CXX test/cpp_headers/opal_spec.o 00:03:16.969 CXX test/cpp_headers/uuid.o 00:03:16.969 LINK env_dpdk_post_init 00:03:17.227 CXX test/cpp_headers/likely.o 00:03:17.227 CC test/env/pci/pci_ut.o 00:03:17.227 CXX test/cpp_headers/dif.o 00:03:17.484 CC app/spdk_top/spdk_top.o 00:03:17.484 CXX test/cpp_headers/keyring_module.o 00:03:17.484 CXX test/cpp_headers/memory.o 00:03:17.742 CC app/vhost/vhost.o 00:03:17.742 LINK pci_ut 00:03:17.742 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.742 CC test/nvme/aer/aer.o 00:03:17.999 CXX test/cpp_headers/dma.o 00:03:17.999 LINK vhost 00:03:17.999 CXX test/cpp_headers/nbd.o 00:03:17.999 CXX test/cpp_headers/conf.o 00:03:17.999 LINK memory_ut 00:03:17.999 LINK aer 00:03:17.999 CC examples/sock/hello_world/hello_sock.o 00:03:18.256 CC examples/thread/thread/thread_ex.o 00:03:18.256 CC app/spdk_dd/spdk_dd.o 00:03:18.256 CXX test/cpp_headers/env_dpdk.o 00:03:18.256 CC test/rpc_client/rpc_client_test.o 00:03:18.256 CXX test/cpp_headers/nvmf_spec.o 00:03:18.256 CC app/fio/nvme/fio_plugin.o 00:03:18.256 LINK hello_sock 00:03:18.514 LINK thread 00:03:18.514 LINK spdk_top 00:03:18.514 CXX test/cpp_headers/iscsi_spec.o 00:03:18.514 LINK rpc_client_test 00:03:18.514 CC app/fio/bdev/fio_plugin.o 00:03:18.514 LINK spdk_dd 00:03:18.514 CXX test/cpp_headers/mmio.o 00:03:18.772 CXX test/cpp_headers/json.o 00:03:19.036 CXX test/cpp_headers/opal.o 00:03:19.036 CXX test/cpp_headers/bdev.o 00:03:19.036 LINK spdk_nvme 00:03:19.036 LINK spdk_bdev 00:03:19.317 CC test/nvme/reset/reset.o 00:03:19.317 CXX test/cpp_headers/keyring.o 00:03:19.317 CC test/nvme/sgl/sgl.o 00:03:19.574 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:19.574 CC examples/nvme/hello_world/hello_world.o 00:03:19.574 CXX test/cpp_headers/base64.o 00:03:19.574 LINK histogram_ut 00:03:19.574 LINK reset 00:03:19.574 LINK sgl 00:03:19.574 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.832 LINK hello_world 00:03:19.832 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.832 CC test/unit/lib/log/log.c/log_ut.o 00:03:19.832 CXX test/cpp_headers/fd.o 00:03:20.396 CXX test/cpp_headers/barrier.o 00:03:20.396 LINK log_ut 00:03:20.397 CC test/nvme/e2edp/nvme_dp.o 00:03:20.653 CXX test/cpp_headers/scsi_spec.o 00:03:20.654 CXX test/cpp_headers/zipf.o 00:03:20.654 CC examples/nvme/reconnect/reconnect.o 00:03:20.910 LINK nvme_dp 00:03:21.167 CXX test/cpp_headers/nvmf.o 00:03:21.167 CC test/nvme/overhead/overhead.o 00:03:21.167 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.167 CC test/nvme/err_injection/err_injection.o 00:03:21.167 CC examples/nvme/arbitration/arbitration.o 00:03:21.167 LINK reconnect 00:03:21.167 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:21.425 CC test/nvme/startup/startup.o 00:03:21.425 LINK err_injection 00:03:21.425 CXX 
test/cpp_headers/queue.o 00:03:21.425 CXX test/cpp_headers/xor.o 00:03:21.425 LINK startup 00:03:21.425 LINK overhead 00:03:21.682 LINK arbitration 00:03:21.682 CXX test/cpp_headers/cpuset.o 00:03:21.682 LINK nvme_manage 00:03:21.939 CXX test/cpp_headers/thread.o 00:03:21.939 CC test/nvme/reserve/reserve.o 00:03:21.939 CC test/nvme/simple_copy/simple_copy.o 00:03:21.939 CXX test/cpp_headers/bdev_zone.o 00:03:21.939 LINK common_ut 00:03:22.196 LINK reserve 00:03:22.196 LINK simple_copy 00:03:22.196 CXX test/cpp_headers/fd_group.o 00:03:22.454 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:22.454 CXX test/cpp_headers/tree.o 00:03:22.454 CXX test/cpp_headers/blob_bdev.o 00:03:22.454 CXX test/cpp_headers/crc64.o 00:03:22.454 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:22.454 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:22.711 LINK base64_ut 00:03:22.711 CC examples/accel/perf/accel_perf.o 00:03:22.711 CXX test/cpp_headers/assert.o 00:03:22.711 CC examples/nvme/hotplug/hotplug.o 00:03:22.969 CC examples/blob/hello_world/hello_blob.o 00:03:22.969 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:22.969 CXX test/cpp_headers/nvme_spec.o 00:03:22.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.969 LINK hotplug 00:03:22.969 CXX test/cpp_headers/endian.o 00:03:23.226 CXX test/cpp_headers/pci_ids.o 00:03:23.226 LINK hello_blob 00:03:23.226 LINK cmb_copy 00:03:23.226 LINK ioat_ut 00:03:23.226 LINK accel_perf 00:03:23.226 CXX test/cpp_headers/log.o 00:03:23.226 CC examples/nvme/abort/abort.o 00:03:23.226 LINK dma_ut 00:03:23.485 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.485 CC test/nvme/connect_stress/connect_stress.o 00:03:23.485 LINK bit_array_ut 00:03:23.485 CC test/nvme/boot_partition/boot_partition.o 00:03:23.485 CXX test/cpp_headers/ftl.o 00:03:23.485 CC test/nvme/compliance/nvme_compliance.o 00:03:23.743 LINK connect_stress 00:03:23.743 LINK boot_partition 00:03:23.743 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:23.743 LINK abort 00:03:23.743 CXX test/cpp_headers/config.o 00:03:23.743 CXX test/cpp_headers/vhost.o 00:03:24.001 LINK cpuset_ut 00:03:24.001 CXX test/cpp_headers/bdev_module.o 00:03:24.001 LINK nvme_compliance 00:03:24.001 CXX test/cpp_headers/nvme_intel.o 00:03:24.001 CXX test/cpp_headers/idxd_spec.o 00:03:24.259 CXX test/cpp_headers/crc16.o 00:03:24.259 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:24.259 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.259 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.259 CXX test/cpp_headers/nvme.o 00:03:24.259 CC test/nvme/fdp/fdp.o 00:03:24.259 LINK crc16_ut 00:03:24.517 CXX test/cpp_headers/stdinc.o 00:03:24.517 LINK doorbell_aers 00:03:24.517 LINK fused_ordering 00:03:24.775 CXX test/cpp_headers/scsi.o 00:03:24.775 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.775 LINK fdp 00:03:24.775 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:24.775 CC test/nvme/cuse/cuse.o 00:03:24.775 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.067 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:25.067 LINK crc32_ieee_ut 00:03:25.067 CXX test/cpp_headers/idxd.o 00:03:25.067 CXX test/cpp_headers/hexlify.o 00:03:25.067 LINK pmr_persistence 00:03:25.067 LINK crc32c_ut 00:03:25.067 CXX test/cpp_headers/reduce.o 00:03:25.331 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:25.331 CC examples/blob/cli/blobcli.o 00:03:25.331 CXX test/cpp_headers/crc32.o 00:03:25.589 LINK crc64_ut 00:03:25.589 CXX test/cpp_headers/init.o 00:03:25.589 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.848 CC test/unit/lib/util/dif.c/dif_ut.o 
00:03:25.848 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:25.848 CC test/unit/lib/util/math.c/math_ut.o 00:03:25.848 CXX test/cpp_headers/nvmf_transport.o 00:03:25.848 LINK blobcli 00:03:25.848 CC test/unit/lib/util/string.c/string_ut.o 00:03:26.106 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:26.106 LINK hello_bdev 00:03:26.106 CXX test/cpp_headers/nvme_zns.o 00:03:26.106 LINK math_ut 00:03:26.106 LINK iov_ut 00:03:26.364 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:26.364 LINK cuse 00:03:26.364 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.364 LINK string_ut 00:03:26.364 CXX test/cpp_headers/util.o 00:03:26.364 CXX test/cpp_headers/jsonrpc.o 00:03:26.622 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.622 CXX test/cpp_headers/env.o 00:03:26.622 CC test/accel/dif/dif.o 00:03:26.622 CC test/blobfs/mkfs/mkfs.o 00:03:26.881 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.881 LINK pipe_ut 00:03:26.881 LINK xor_ut 00:03:26.881 LINK dif_ut 00:03:26.881 LINK mkfs 00:03:27.139 CC test/event/event_perf/event_perf.o 00:03:27.139 CXX test/cpp_headers/lvol.o 00:03:27.139 CXX test/cpp_headers/histogram_data.o 00:03:27.139 LINK dif 00:03:27.139 LINK event_perf 00:03:27.397 CXX test/cpp_headers/event.o 00:03:27.397 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:27.397 CC test/lvol/esnap/esnap.o 00:03:27.397 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:27.397 LINK bdevperf 00:03:27.397 CXX test/cpp_headers/trace.o 00:03:27.655 CXX test/cpp_headers/ioat_spec.o 00:03:27.923 CXX test/cpp_headers/string.o 00:03:27.923 LINK json_util_ut 00:03:27.923 CXX test/cpp_headers/ublk.o 00:03:28.185 CC test/event/reactor/reactor.o 00:03:28.185 CC test/event/reactor_perf/reactor_perf.o 00:03:28.185 CXX test/cpp_headers/bit_array.o 00:03:28.185 LINK reactor 00:03:28.185 LINK reactor_perf 00:03:28.442 CXX test/cpp_headers/scheduler.o 00:03:28.442 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:28.700 CXX test/cpp_headers/blob.o 00:03:28.700 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:28.700 CXX test/cpp_headers/gpt_spec.o 00:03:28.958 CXX test/cpp_headers/sock.o 00:03:28.958 CXX test/cpp_headers/vmd.o 00:03:28.958 CC test/event/app_repeat/app_repeat.o 00:03:29.216 CXX test/cpp_headers/rpc.o 00:03:29.216 LINK app_repeat 00:03:29.216 CC test/event/scheduler/scheduler.o 00:03:29.216 LINK json_write_ut 00:03:29.216 LINK pci_event_ut 00:03:29.216 CXX test/cpp_headers/accel_module.o 00:03:29.474 CXX test/cpp_headers/bit_pool.o 00:03:29.474 LINK scheduler 00:03:29.474 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:29.474 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:30.038 LINK json_parse_ut 00:03:30.038 CC test/bdev/bdevio/bdevio.o 00:03:30.296 LINK idxd_user_ut 00:03:30.296 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:30.556 LINK bdevio 00:03:30.815 LINK idxd_ut 00:03:30.815 CC examples/nvmf/nvmf/nvmf.o 00:03:30.815 LINK jsonrpc_server_ut 00:03:31.073 LINK nvmf 00:03:31.073 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:32.007 LINK rpc_ut 00:03:32.573 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:32.573 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:32.573 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:32.573 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:32.573 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:32.573 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:33.138 LINK keyring_ut 00:03:33.396 LINK notify_ut 00:03:33.654 LINK esnap 00:03:33.654 LINK iobuf_ut 00:03:33.654 LINK posix_ut 00:03:34.587 LINK sock_ut 00:03:34.844 CC 
test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:34.844 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:35.102 LINK thread_ut 00:03:35.418 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:36.001 LINK nvme_poll_group_ut 00:03:36.001 LINK nvme_ns_ut 00:03:36.001 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:36.259 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:36.259 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:36.259 LINK nvme_ctrlr_cmd_ut 00:03:36.259 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:36.516 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:36.516 LINK nvme_ut 00:03:36.774 LINK nvme_qpair_ut 00:03:36.774 LINK nvme_quirks_ut 00:03:36.774 LINK nvme_ns_ocssd_cmd_ut 00:03:36.774 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:37.031 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:37.031 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:37.031 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:37.031 LINK nvme_pcie_ut 00:03:37.031 LINK nvme_ns_cmd_ut 00:03:37.289 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:37.548 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:37.807 LINK nvme_transport_ut 00:03:37.807 LINK nvme_io_msg_ut 00:03:38.064 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:38.064 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:38.064 LINK nvme_opal_ut 00:03:38.064 LINK nvme_fabric_ut 00:03:38.321 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:38.321 LINK nvme_ctrlr_ut 00:03:38.579 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:38.579 LINK nvme_pcie_common_ut 00:03:38.836 LINK blob_bdev_ut 00:03:39.094 LINK rpc_ut 00:03:39.352 LINK subsystem_ut 00:03:39.352 LINK nvme_tcp_ut 00:03:39.352 LINK nvme_cuse_ut 00:03:39.610 LINK accel_ut 00:03:39.610 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:39.610 CC test/unit/lib/event/app.c/app_ut.o 00:03:39.897 LINK nvme_rdma_ut 00:03:40.154 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:40.154 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:40.154 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:40.154 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:40.154 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:40.154 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:40.429 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:40.429 LINK scsi_nvme_ut 00:03:40.688 LINK app_ut 00:03:40.688 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:40.959 LINK gpt_ut 00:03:40.959 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:40.959 LINK reactor_ut 00:03:40.959 LINK bdev_zone_ut 00:03:41.217 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:41.217 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:41.217 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:41.782 LINK bdev_raid_sb_ut 00:03:41.782 LINK vbdev_lvol_ut 00:03:42.040 LINK concat_ut 00:03:42.040 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:42.040 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:42.298 LINK vbdev_zone_block_ut 
00:03:42.555 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:42.813 LINK raid1_ut 00:03:43.071 LINK bdev_raid_ut 00:03:43.329 LINK raid0_ut 00:03:44.267 LINK raid5f_ut 00:03:44.267 LINK part_ut 00:03:44.525 LINK bdev_ut 00:03:46.424 LINK blob_ut 00:03:46.424 LINK bdev_ut 00:03:46.682 LINK bdev_nvme_ut 00:03:46.682 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:46.682 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:46.682 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:46.682 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:46.682 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:46.940 LINK tree_ut 00:03:46.940 LINK blobfs_bdev_ut 00:03:47.198 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:47.198 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:47.198 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:47.198 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:47.198 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:47.198 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:47.198 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:47.765 LINK dev_ut 00:03:47.765 LINK ftl_l2p_ut 00:03:47.765 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:48.022 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:48.280 LINK ftl_io_ut 00:03:48.537 LINK blobfs_sync_ut 00:03:48.537 LINK blobfs_async_ut 00:03:48.537 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:48.793 LINK lun_ut 00:03:48.793 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:49.051 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:49.051 LINK ftl_band_ut 00:03:49.051 LINK lvol_ut 00:03:49.051 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:49.309 LINK ftl_p2l_ut 00:03:49.309 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:49.309 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:49.567 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:49.567 LINK ftl_bitmap_ut 00:03:49.567 LINK scsi_ut 00:03:49.871 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:49.871 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:49.871 LINK ftl_mempool_ut 00:03:50.127 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:50.127 LINK subsystem_ut 00:03:50.127 LINK ctrlr_bdev_ut 00:03:50.384 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:50.384 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:50.949 LINK ctrlr_ut 00:03:50.949 LINK scsi_bdev_ut 00:03:50.949 LINK nvmf_ut 00:03:50.949 LINK ctrlr_discovery_ut 00:03:50.949 LINK ftl_mngt_ut 00:03:51.206 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:51.206 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:51.770 LINK scsi_pr_ut 00:03:51.770 LINK auth_ut 00:03:52.028 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:52.028 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:52.028 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:52.028 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:52.028 LINK ftl_layout_upgrade_ut 00:03:52.028 LINK tcp_ut 00:03:52.285 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:52.285 LINK ftl_sb_ut 00:03:52.285 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:52.542 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:52.542 LINK init_grp_ut 00:03:52.799 LINK param_ut 00:03:53.732 LINK portal_grp_ut 00:03:53.732 LINK conn_ut 00:03:53.990 LINK tgt_node_ut 00:03:53.990 LINK rdma_ut 00:03:54.926 LINK iscsi_ut 00:03:54.926 LINK vhost_ut 00:03:55.223 LINK transport_ut 00:03:55.481 00:03:55.481 real 2m5.479s 00:03:55.481 user 10m27.822s 00:03:55.481 sys 1m49.148s 00:03:55.481 08:30:30 unittest_build -- 
common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:55.481 08:30:30 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:55.481 ************************************ 00:03:55.481 END TEST unittest_build 00:03:55.481 ************************************ 00:03:55.481 08:30:30 -- common/autotest_common.sh@1142 -- $ return 0 00:03:55.481 08:30:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:55.481 08:30:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:55.481 08:30:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:55.481 08:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.481 08:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:55.481 08:30:30 -- pm/common@44 -- $ pid=2448 00:03:55.481 08:30:30 -- pm/common@50 -- $ kill -TERM 2448 00:03:55.481 08:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.481 08:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:55.481 08:30:30 -- pm/common@44 -- $ pid=2449 00:03:55.481 08:30:30 -- pm/common@50 -- $ kill -TERM 2449 00:03:55.481 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:55.481 08:30:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:55.481 08:30:30 -- nvmf/common.sh@7 -- # uname -s 00:03:55.481 08:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:55.481 08:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:55.481 08:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:55.481 08:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:55.481 08:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:55.481 08:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:55.481 08:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:55.481 08:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:55.481 08:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:55.481 08:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:55.481 08:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a063096-cd90-40de-ba0e-0c5c435a4ccc 00:03:55.481 08:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a063096-cd90-40de-ba0e-0c5c435a4ccc 00:03:55.481 08:30:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:55.481 08:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:55.481 08:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:55.481 08:30:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:55.481 08:30:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:55.481 08:30:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:55.481 08:30:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:55.481 08:30:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:55.481 08:30:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:55.481 08:30:30 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:55.481 08:30:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:55.481 08:30:30 -- paths/export.sh@5 -- # export PATH 00:03:55.481 08:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:55.481 08:30:30 -- nvmf/common.sh@47 -- # : 0 00:03:55.481 08:30:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:55.481 08:30:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:55.481 08:30:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:55.481 08:30:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:55.481 08:30:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:55.481 08:30:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:55.481 08:30:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:55.481 08:30:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:55.739 08:30:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:55.739 08:30:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:55.739 08:30:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:55.739 08:30:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:55.739 08:30:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.739 08:30:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:55.739 08:30:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.739 08:30:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:56.305 08:30:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:56.305 08:30:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:56.305 08:30:31 -- spdk/autotest.sh@48 -- # udevadm_pid=98930 00:03:56.305 08:30:31 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:56.305 08:30:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:56.305 08:30:31 -- pm/common@17 -- # local monitor 00:03:56.305 08:30:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.305 08:30:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.305 08:30:31 -- pm/common@25 -- # sleep 1 00:03:56.305 08:30:31 -- pm/common@21 -- # date +%s 00:03:56.305 08:30:31 -- pm/common@21 -- # date +%s 00:03:56.305 08:30:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720773031 00:03:56.305 08:30:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720773031 00:03:56.305 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720773031_collect-vmstat.pm.log 00:03:56.305 
Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720773031_collect-cpu-load.pm.log 00:03:57.240 08:30:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:57.240 08:30:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:57.240 08:30:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.240 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:57.240 08:30:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:57.240 08:30:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:57.240 08:30:32 -- common/autotest_common.sh@10 -- # set +x 00:03:57.240 08:30:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:57.240 08:30:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:57.240 08:30:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:57.240 08:30:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:57.240 08:30:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:57.240 08:30:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:57.240 08:30:32 -- common/autotest_common.sh@1455 -- # uname 00:03:57.240 08:30:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:57.240 08:30:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:57.240 08:30:32 -- common/autotest_common.sh@1475 -- # uname 00:03:57.240 08:30:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:57.240 08:30:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:57.240 08:30:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:57.240 08:30:32 -- spdk/autotest.sh@72 -- # hash lcov 00:03:57.240 08:30:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:57.240 08:30:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:57.240 --rc lcov_branch_coverage=1 00:03:57.240 --rc lcov_function_coverage=1 00:03:57.240 --rc genhtml_branch_coverage=1 00:03:57.240 --rc genhtml_function_coverage=1 00:03:57.240 --rc genhtml_legend=1 00:03:57.240 --rc geninfo_all_blocks=1 00:03:57.240 ' 00:03:57.240 08:30:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:57.240 --rc lcov_branch_coverage=1 00:03:57.240 --rc lcov_function_coverage=1 00:03:57.240 --rc genhtml_branch_coverage=1 00:03:57.240 --rc genhtml_function_coverage=1 00:03:57.240 --rc genhtml_legend=1 00:03:57.240 --rc geninfo_all_blocks=1 00:03:57.240 ' 00:03:57.240 08:30:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:57.240 --rc lcov_branch_coverage=1 00:03:57.240 --rc lcov_function_coverage=1 00:03:57.240 --rc genhtml_branch_coverage=1 00:03:57.240 --rc genhtml_function_coverage=1 00:03:57.241 --rc genhtml_legend=1 00:03:57.241 --rc geninfo_all_blocks=1 00:03:57.241 --no-external' 00:03:57.241 08:30:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:57.241 --rc lcov_branch_coverage=1 00:03:57.241 --rc lcov_function_coverage=1 00:03:57.241 --rc genhtml_branch_coverage=1 00:03:57.241 --rc genhtml_function_coverage=1 00:03:57.241 --rc genhtml_legend=1 00:03:57.241 --rc geninfo_all_blocks=1 00:03:57.241 --no-external' 00:03:57.241 08:30:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:57.241 lcov: LCOV version 1.15 00:03:57.241 08:30:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:59.143 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:59.143 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:59.143 
00:03:59.143 geninfo: WARNING: GCOV did not produce any data for the header-compile objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers/; between 00:03:59.143 and 00:03:59.663 each of the following .gcno files was flagged once as "no functions found": uuid, hexlify, nvmf, log, ublk, nvme_spec, vfio_user_pci, dma, blobfs_bdev, json, pci_ids, bit_array, memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util, keyring, idxd_spec, reduce, notify, accel_module, conf, xor, tree
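These warnings are the expected by-product of SPDK's header-compile checks: each generated test/cpp_headers/<name>.cpp translation unit only includes one public header to prove it compiles standalone, so it defines no functions of its own and its .gcno file holds nothing for geninfo to record. A minimal sketch of a capture step that produces this kind of output, assuming a GCC --coverage build with the lcov toolchain installed (the exact invocation SPDK's CI uses may differ):

    cd /home/vagrant/spdk_repo/spdk
    # lcov drives geninfo over every .gcno/.gcda pair it finds; objects with
    # no compiled functions trigger the "no functions found" warning and are
    # simply skipped in the resulting tracefile.
    lcov --capture --directory . --output-file coverage.info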
00:04:55.882 08:31:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:55.882 08:31:27 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:55.882 08:31:27 -- common/autotest_common.sh@10 -- # set +x
00:04:55.882 08:31:27 -- spdk/autotest.sh@91 -- # rm -f
00:04:55.882 08:31:27 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:55.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:55.882 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:04:55.882 08:31:28 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:55.882 08:31:28 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:55.882 08:31:28 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:55.882 08:31:28 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:55.882 08:31:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:55.882 08:31:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:55.882 08:31:28 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:55.882 08:31:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:55.882 08:31:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:55.882 08:31:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:55.882 08:31:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:55.882 08:31:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:55.882 08:31:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:55.882 08:31:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:55.882 08:31:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:55.882 No valid GPT data, bailing
00:04:55.882 08:31:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:55.882 08:31:28 -- scripts/common.sh@391 -- # pt=
00:04:55.882 08:31:28 -- scripts/common.sh@392 -- # return 1
00:04:55.882 08:31:28 -- spdk/autotest.sh@114 -- # dd
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:55.882 1+0 records in 00:04:55.882 1+0 records out 00:04:55.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187325 s, 56.0 MB/s 00:04:55.882 08:31:28 -- spdk/autotest.sh@118 -- # sync 00:04:55.882 08:31:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.882 08:31:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.883 08:31:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:55.883 08:31:29 -- spdk/autotest.sh@124 -- # uname -s 00:04:55.883 08:31:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:55.883 08:31:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:55.883 08:31:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.883 08:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.883 08:31:29 -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 ************************************ 00:04:55.883 START TEST setup.sh 00:04:55.883 ************************************ 00:04:55.883 08:31:29 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:55.883 * Looking for test storage... 00:04:55.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.883 08:31:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:55.883 08:31:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:55.883 08:31:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:55.883 08:31:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.883 08:31:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.883 08:31:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 ************************************ 00:04:55.883 START TEST acl 00:04:55.883 ************************************ 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:55.883 * Looking for test storage... 
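The pre_cleanup block above shows how autotest decides a test disk may be scrubbed: zoned block devices are excluded, /dev/nvme0n1 is probed for a partition table (spdk-gpt.py, then blkid), and only after block_in_use returns 1 is the first MiB zeroed. A standalone approximation of that probe-then-wipe logic, under the same assumptions (non-zoned scratch device, nothing mounted; the dd is destructive):

    DISK=/dev/nvme0n1
    zoned=/sys/block/${DISK##*/}/queue/zoned
    # Mirror the is_block_zoned guard from the trace: never wipe zoned devices.
    if [[ -e $zoned && $(<"$zoned") != none ]]; then
        echo "refusing to wipe zoned device $DISK" >&2
    # No recognizable partition-table type reported? Treat the disk as free
    # and clear any stale GPT/MBR headers in its first MiB.
    elif [[ -z $(blkid -s PTTYPE -o value "$DISK") ]]; then
        dd if=/dev/zero of="$DISK" bs=1M count=1
    fi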
00:04:55.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:55.883 08:31:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:55.883 08:31:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.883 08:31:29 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 08:31:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:55.883 08:31:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.883 08:31:29 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 Hugepages 00:04:55.883 node hugesize free / total 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 00:04:55.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.883 
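The collect_setup_devs loop traced above builds the acl test's device list by parsing `setup.sh status` line by line: read splits each row into columns, rows that do not look like a PCI BDF (the Hugepages header and hugepage-size rows) are skipped, non-nvme drivers are skipped, and surviving BDFs land in devs/drivers. A standalone reconstruction inferred from that xtrace, with the column layout assumed to match the status table:

    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue           # not a PCI BDF row
        [[ $driver == nvme ]] || continue           # keep only NVMe-bound functions
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the block list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)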
08:31:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.883 08:31:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.883 08:31:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.883 08:31:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.883 08:31:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 ************************************ 00:04:55.883 START TEST denied 00:04:55.883 ************************************ 00:04:55.883 08:31:30 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:55.883 08:31:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:55.883 08:31:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:55.883 08:31:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:55.883 08:31:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.883 08:31:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.819 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.819 08:31:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.393 ************************************ 00:04:57.393 END TEST denied 00:04:57.393 ************************************ 00:04:57.393 00:04:57.393 real 0m1.881s 00:04:57.393 user 0m0.515s 00:04:57.393 sys 0m1.420s 00:04:57.393 08:31:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.393 08:31:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:57.393 08:31:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:57.393 08:31:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.393 08:31:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.393 08:31:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.393 08:31:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.393 ************************************ 00:04:57.393 START TEST allowed 00:04:57.393 ************************************ 00:04:57.393 08:31:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:57.393 08:31:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:57.393 08:31:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output 
config 00:04:57.393 08:31:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:57.393 08:31:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.393 08:31:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.303 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.303 08:31:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:59.303 08:31:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:59.303 08:31:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:59.303 08:31:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.303 08:31:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.303 00:04:59.303 real 0m2.006s 00:04:59.303 user 0m0.422s 00:04:59.303 sys 0m1.558s 00:04:59.303 ************************************ 00:04:59.303 END TEST allowed 00:04:59.303 ************************************ 00:04:59.303 08:31:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.303 08:31:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:59.303 08:31:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:59.303 00:04:59.303 real 0m5.072s 00:04:59.303 user 0m1.594s 00:04:59.303 sys 0m3.558s 00:04:59.303 08:31:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.303 ************************************ 00:04:59.303 END TEST acl 00:04:59.303 ************************************ 00:04:59.303 08:31:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.562 08:31:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:59.562 08:31:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:59.562 08:31:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.562 08:31:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.562 08:31:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.562 ************************************ 00:04:59.562 START TEST hugepages 00:04:59.562 ************************************ 00:04:59.562 08:31:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:59.562 * Looking for test storage... 
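The acl suite that just finished exercises setup.sh's allow/block lists: the denied test blocks the NVMe controller and asserts it is skipped and left on its kernel driver, while the allowed test permits it and asserts it gets rebound to a userspace driver. A condensed replay of the two scenarios, with the environment variables and grep patterns taken straight from the xtrace (run as root on a disposable machine, since setup.sh rebinds drivers):

    cd /home/vagrant/spdk_repo/spdk
    # denied: a blocked controller must be skipped and stay on the nvme driver
    PCI_BLOCKED=' 0000:00:10.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:10.0'
    [[ $(readlink -f /sys/bus/pci/devices/0000:00:10.0/driver) == */nvme ]]
    scripts/setup.sh reset
    # allowed: an explicitly allowed controller is rebound for userspace I/O
    PCI_ALLOWED=0000:00:10.0 scripts/setup.sh config \
        | grep -E '0000:00:10.0 .*: nvme -> .*'
    scripts/setup.sh reset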
00:04:59.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:59.562 08:31:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 2785508 kB' 'MemAvailable: 7408636 kB' 'Buffers: 37756 kB' 'Cached: 4701876 kB' 'SwapCached: 0 kB' 'Active: 1235636 kB' 'Inactive: 3630244 kB' 'Active(anon): 135272 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100364 kB' 'Inactive(file): 3628444 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 144212 kB' 'Mapped: 73888 kB' 'Shmem: 2624 kB' 'KReclaimable: 216392 kB' 'Slab: 309660 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 93268 kB' 'KernelStack: 4620 kB' 'PageTables: 3700 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028396 kB' 'Committed_AS: 629472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided, 00:04:59.562-00:04:59.564: the same three-record cycle ("[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", "continue", "read -r var val _") repeats for every /proc/meminfo field from MemTotal through HugePages_Surp until the Hugepagesize row matches]
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:59.564 08:31:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:59.564 08:31:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:59.564 08:31:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:59.564 08:31:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.564 ************************************
00:04:59.564 START TEST default_setup
00:04:59.564 ************************************
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
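The trace above shows the two helpers at work: get_meminfo pulls the Hugepagesize row out of /proc/meminfo (2048 kB on this runner), and get_test_nr_hugepages converts the requested pool size into a page count (2097152 / 2048 = 1024, matching nr_hugepages=1024 in the trace). A simplified sketch of that logic; the real versions live in test/setup/common.sh and test/setup/hugepages.sh and use mapfile over a printf'd snapshot instead of reading the file directly:

    get_meminfo() {  # get_meminfo <Field> -> value column of /proc/meminfo
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)  # 2048 (kB) here
    size_kb=2097152                                # pool requested by the test, in kB
    nr_hugepages=$((size_kb / default_hugepages))  # -> 1024 pages
    # default_setup later pushes nodes_test[0]=1024 into
    # /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages.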
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.564 08:31:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:59.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:00.080 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:00.652 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880964 kB' 'MemAvailable: 9504176 kB' 'Buffers: 37756 kB' 'Cached: 4701984 kB' 'SwapCached: 0 kB' 'Active: 1240104 kB' 'Inactive: 3630148 kB' 'Active(anon): 139540 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1100564 kB' 'Inactive(file): 3628360 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 728 kB' 'Writeback: 0 kB' 'AnonPages: 148720 kB' 'Mapped: 73860 kB' 'Shmem: 2616 kB' 'KReclaimable: 216360 kB' 'Slab: 309608 kB' 'SReclaimable: 216360 kB' 'SUnreclaim: 93248 kB' 'KernelStack: 4532 kB' 'PageTables: 3328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 636324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided, 00:05:00.652-00:05:00.653: the read/compare/continue cycle runs again, this time matching each meminfo field against AnonHugePages; the captured log breaks off mid-loop after the VmallocUsed comparison]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4881224 kB' 'MemAvailable: 9504436 kB' 'Buffers: 37756 kB' 'Cached: 4701984 kB' 'SwapCached: 0 kB' 'Active: 1240104 kB' 'Inactive: 3630148 kB' 'Active(anon): 139540 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1100564 kB' 'Inactive(file): 3628360 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 728 kB' 'Writeback: 0 kB' 'AnonPages: 148720 kB' 'Mapped: 73860 kB' 'Shmem: 2616 kB' 'KReclaimable: 216360 kB' 'Slab: 309608 kB' 
'SReclaimable: 216360 kB' 'SUnreclaim: 93248 kB' 'KernelStack: 4532 kB' 'PageTables: 3328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 642044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.653 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.654 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 
08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4881224 kB' 'MemAvailable: 9504436 kB' 'Buffers: 37756 kB' 'Cached: 4701984 kB' 'SwapCached: 0 kB' 'Active: 1240104 kB' 'Inactive: 3630148 kB' 'Active(anon): 139540 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1100564 kB' 'Inactive(file): 3628360 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 728 kB' 'Writeback: 0 kB' 'AnonPages: 148464 kB' 'Mapped: 73860 kB' 'Shmem: 2616 kB' 'KReclaimable: 216360 kB' 'Slab: 309608 kB' 'SReclaimable: 216360 kB' 'SUnreclaim: 93248 kB' 'KernelStack: 4532 kB' 'PageTables: 3328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 642044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 
08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.655 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 
08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.656 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:00.657 nr_hugepages=1024 00:05:00.657 resv_hugepages=0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.657 surplus_hugepages=0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.657 anon_hugepages=0 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4881500 kB' 'MemAvailable: 9504712 kB' 'Buffers: 37756 kB' 'Cached: 4701984 kB' 'SwapCached: 0 kB' 'Active: 1240364 kB' 'Inactive: 3630148 kB' 'Active(anon): 139800 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1100564 kB' 'Inactive(file): 3628360 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 736 kB' 'Writeback: 0 kB' 'AnonPages: 148444 kB' 'Mapped: 73696 kB' 'Shmem: 2616 kB' 'KReclaimable: 216360 kB' 'Slab: 309608 kB' 'SReclaimable: 216360 kB' 'SUnreclaim: 93248 kB' 'KernelStack: 4552 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 646880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.657 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4881168 kB' 'MemUsed: 7369928 kB' 'Active: 1240104 kB' 'Inactive: 3630152 kB' 'Active(anon): 139540 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1100564 kB' 'Inactive(file): 3628364 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 736 kB' 'Writeback: 0 kB' 'FilePages: 4739744 kB' 'Mapped: 73696 kB' 'AnonPages: 148680 kB' 'Shmem: 2616 kB' 'KernelStack: 4532 kB' 'PageTables: 3352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216360 kB' 'Slab: 309608 kB' 'SReclaimable: 216360 kB' 'SUnreclaim: 93248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.658 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.659 08:31:35 
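The loop traced above is setup/common.sh's get_meminfo walking every field of /sys/devices/system/node/node0/meminfo until it reaches HugePages_Surp and echoes its value. A minimal standalone sketch of that lookup pattern, assuming the same extglob "Node N " prefix stripping seen at common.sh@29; the helper name lookup_meminfo is illustrative and not part of the suite:

#!/usr/bin/env bash
# Sketch of the meminfo lookup exercised in the trace above (not the suite's code).
shopt -s extglob

lookup_meminfo() {                 # lookup_meminfo <field> [node]
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    # Per-node stats live under /sys; system-wide stats come from /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it as common.sh@29 does.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

lookup_meminfo HugePages_Surp 0    # prints 0 for node0 in the state traced above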
setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.659 node0=1024 expecting 1024 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.659 00:05:00.659 real 0m1.128s 00:05:00.659 user 0m0.305s 00:05:00.659 sys 0m0.792s 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.659 08:31:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:00.659 ************************************ 00:05:00.660 END TEST default_setup 00:05:00.660 ************************************ 00:05:00.660 08:31:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:00.660 08:31:35 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:00.660 08:31:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.660 08:31:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.660 08:31:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.660 ************************************ 00:05:00.660 START TEST per_node_1G_alloc 00:05:00.660 ************************************ 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.660 08:31:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.660 08:31:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.919 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5929788 kB' 
'MemAvailable: 10553020 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239632 kB' 'Inactive: 3630092 kB' 'Active(anon): 139004 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100628 kB' 'Inactive(file): 3628300 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 148520 kB' 'Mapped: 73632 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309328 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 92952 kB' 'KernelStack: 4528 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 648768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.489 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 
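At this point verify_nr_hugepages has established anon=0 and is about to re-read HugePages_Surp before comparing totals. A condensed, hypothetical sketch of that bookkeeping (hugepages.sh@89-110), assuming a plain awk lookup in place of the suite's get_meminfo:

#!/usr/bin/env bash
# Hypothetical condensation of the verification traced above: the expected page
# count must equal the kernel's HugePages_Total once surplus and reserved pages
# are accounted for.
expected=512                                            # nr_hugepages for this test
get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

anon=$(get AnonHugePages)       # THP usage, in kB (0 in the trace)
resv=$(get HugePages_Rsvd)      # reserved but not yet faulted-in pages
surp=$(get HugePages_Surp)      # overcommitted pages beyond the static pool
total=$(get HugePages_Total)

echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( total == expected + surp + resv )) || { echo "hugepage count mismatch"; exit 1; }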
00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5930048 kB' 'MemAvailable: 10553280 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240152 kB' 'Inactive: 3630092 kB' 'Active(anon): 139524 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100628 kB' 'Inactive(file): 3628300 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 148652 kB' 'Mapped: 73632 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309328 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 92952 kB' 'KernelStack: 4596 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 654128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.490 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.491 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5930048 kB' 'MemAvailable: 10553280 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240152 kB' 'Inactive: 3630092 kB' 'Active(anon): 139524 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100628 kB' 'Inactive(file): 3628300 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 148784 kB' 'Mapped: 73632 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309328 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 92952 kB' 'KernelStack: 4596 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 648556 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.492 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.493 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.494 nr_hugepages=512 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:01.494 resv_hugepages=0 00:05:01.494 surplus_hugepages=0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.494 anon_hugepages=0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5930308 kB' 'MemAvailable: 10553540 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240412 kB' 'Inactive: 3630092 kB' 'Active(anon): 139784 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100628 kB' 'Inactive(file): 3628300 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 149176 kB' 'Mapped: 73632 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309328 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 92952 kB' 'KernelStack: 4664 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 640604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.494 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 
08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 
08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.495 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5930828 kB' 'MemUsed: 6320268 kB' 'Active: 1240412 kB' 'Inactive: 3630092 kB' 'Active(anon): 139784 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100628 kB' 'Inactive(file): 3628300 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'FilePages: 4739744 kB' 'Mapped: 
73632 kB' 'AnonPages: 149436 kB' 'Shmem: 2616 kB' 'KernelStack: 4596 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216376 kB' 'Slab: 309328 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 92952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.496 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 
08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.497 node0=512 expecting 512 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:01.497 00:05:01.497 real 0m0.627s 00:05:01.497 user 0m0.217s 00:05:01.497 sys 0m0.444s 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.497 08:31:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:01.497 ************************************ 00:05:01.497 END TEST per_node_1G_alloc 00:05:01.497 ************************************ 00:05:01.497 08:31:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:01.497 08:31:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:01.497 08:31:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.497 08:31:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.497 08:31:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.497 ************************************ 00:05:01.497 START TEST even_2G_alloc 00:05:01.497 ************************************ 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.497 08:31:36 
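The long trace above is the setup/common.sh get_meminfo helper walking a meminfo file field by field (IFS=': ', read -r var val _, continue) until it reaches the requested key — first HugePages_Total and then HugePages_Surp for node 0. A minimal stand-in for that parsing pattern, with an assumed function name get_meminfo_sketch rather than the real helper, looks like this:

shopt -s extglob

# Minimal stand-in for the helper traced above (the name is an assumption).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip it so the
    # field names look the same as in /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # a kB figure, or a plain count for HugePages_*
            return 0
        fi
    done
    return 1
}

# Example: hugepage pool reported for NUMA node 0.
get_meminfo_sketch HugePages_Total 0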
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.497 08:31:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.755 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880136 kB' 'MemAvailable: 9503368 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239808 kB' 'Inactive: 3630072 kB' 'Active(anon): 139160 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100648 kB' 'Inactive(file): 3628280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 148208 kB' 'Mapped: 73620 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309656 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 93280 kB' 'KernelStack: 4496 kB' 'PageTables: 3404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 646948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 
08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.324 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.325 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.326 08:31:37 
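The verify pass reads AnonHugePages (0 above) and then HugePages_Surp from /proc/meminfo and applies the arithmetic seen earlier in the trace: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages, and the per-node pools must sum to the same figure. A standalone sketch of that bookkeeping, independent of the SPDK helpers, might be:

# Standalone sketch of the check (awk used here instead of the SPDK helpers).
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
nr_hugepages=1024   # what the even_2G_alloc run asked for

(( total == nr_hugepages + surp + resv )) || echo "global pool mismatch"

# The per-node pools must add up to the global figure as well.
sum=0
for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    (( sum += $(<"$f") ))
done
(( sum == total )) || echo "per-node split does not add up"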
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880136 kB' 'MemAvailable: 9503368 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239880 kB' 'Inactive: 3630072 kB' 'Active(anon): 139232 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100648 kB' 'Inactive(file): 3628280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'AnonPages: 148340 kB' 'Mapped: 73620 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309656 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 93280 kB' 'KernelStack: 4480 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 652668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.326 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 
08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.327 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880144 kB' 'MemAvailable: 9503376 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239960 kB' 'Inactive: 3630072 kB' 'Active(anon): 139312 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100648 kB' 'Inactive(file): 3628280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'AnonPages: 148420 kB' 'Mapped: 73620 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309656 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 93280 kB' 'KernelStack: 4464 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 652668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.328 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
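The records on either side of this point are xtrace output of setup/common.sh's get_meminfo helper testing one meminfo key per iteration against the requested field. A minimal sketch of that loop, reconstructed only from the commands visible in this trace (the argument wiring and the final fallback return are assumptions, everything else mirrors the traced statements at common.sh@16-@33):

  shopt -s extglob                          # the "Node +([0-9]) " strip below is an extglob pattern
  get_meminfo() {                           # e.g. get_meminfo HugePages_Rsvd [node]
      local get=$1 node=$2                  # assumed wiring; the trace only shows the resulting values
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # switch to the per-node meminfo file when a node number was supplied
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix of per-node files (common.sh@29)
      while IFS=': ' read -r var val _; do
          # every key that is not the requested one produces one "continue" record in this log
          [[ $var == "$get" ]] && echo "$val" && return 0
          continue
      done < <(printf '%s\n' "${mem[@]}")
      return 1                              # assumed fallback; never reached in this trace
  }

Called as get_meminfo HugePages_Rsvd against the snapshot printed at common.sh@16 above, this walks every key until HugePages_Rsvd matches and prints 0, which is the '# echo 0' / '# return 0' pair recorded further down.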
00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.329 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.330 nr_hugepages=1024 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.330 resv_hugepages=0 00:05:02.330 surplus_hugepages=0 00:05:02.330 anon_hugepages=0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880656 kB' 'MemAvailable: 9503888 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239920 kB' 'Inactive: 3630072 kB' 'Active(anon): 139272 kB' 'Inactive(anon): 1792 kB' 
'Active(file): 1100648 kB' 'Inactive(file): 3628280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'AnonPages: 149140 kB' 'Mapped: 73620 kB' 'Shmem: 2616 kB' 'KReclaimable: 216376 kB' 'Slab: 309656 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 93280 kB' 'KernelStack: 4516 kB' 'PageTables: 3320 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 651328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.330 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
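Stepping back from the per-key records, the hugepages.sh statements interleaved through this trace (@99-@110 above, @112-@117 below) do the actual even-allocation accounting: read the surplus and reserved counts, echo the totals, check the global pool against nr_hugepages, then repeat the surplus query per NUMA node. A condensed sketch of that pass, reusing the get_meminfo helper sketched earlier, with nr_hugepages=1024 as in this run; the exact control flow, the anon_hugepages source, and the failure handling are assumptions:

  nr_hugepages=1024                              # the even_2G_alloc target in this run

  surp=$(get_meminfo HugePages_Surp)             # 0 here (hugepages.sh@99)
  resv=$(get_meminfo HugePages_Rsvd)             # 0 here (hugepages.sh@100)

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=0"                        # anon count is computed earlier in the test; 0 in this run

  # the pool reported by /proc/meminfo must account for every allocated page
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

  # get_nodes: record the expectation for each NUMA node directory (only node0 in this run)
  for node in /sys/devices/system/node/node[0-9]*; do
      nodes_sys[${node##*node}]=$nr_hugepages
  done

  # per-node pass: fold reserved pages back into the expectation, then read the
  # node-local surplus through the same helper, now with a node argument
  for node in "${!nodes_sys[@]}"; do
      (( nodes_test[node] += resv ))
      get_meminfo HugePages_Surp "$node"
  done

This is why the trace below switches mem_f to /sys/devices/system/node/node0/meminfo and rescans the node-local snapshot for HugePages_Surp.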
00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.331 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4880656 kB' 'MemUsed: 7370440 kB' 'Active: 1240180 kB' 'Inactive: 3630072 kB' 'Active(anon): 139532 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100648 kB' 'Inactive(file): 3628280 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 752 kB' 'Writeback: 0 kB' 'FilePages: 4739744 kB' 'Mapped: 73620 kB' 'AnonPages: 149012 kB' 'Shmem: 2616 kB' 'KernelStack: 4516 kB' 'PageTables: 3320 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216376 kB' 'Slab: 309656 kB' 'SReclaimable: 216376 kB' 'SUnreclaim: 93280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.332 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 
08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.333 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.334 node0=1024 expecting 1024 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.334 00:05:02.334 real 0m0.882s 00:05:02.334 user 0m0.220s 00:05:02.334 sys 0m0.693s 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.334 08:31:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.334 ************************************ 00:05:02.334 END TEST even_2G_alloc 00:05:02.334 ************************************ 00:05:02.334 08:31:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.334 08:31:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:02.334 08:31:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.334 08:31:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.334 08:31:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.334 ************************************ 00:05:02.334 START TEST odd_alloc 00:05:02.334 ************************************ 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 
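At this point even_2G_alloc has confirmed "node0=1024 expecting 1024", and odd_alloc starts by requesting HUGEMEM=2049, which the trace shows get_test_nr_hugepages turning 2098176 kB into nr_hugepages=1025, i.e. an intentionally odd count of 2048 kB pages. The long [[ key == pattern ]] / continue runs that dominate this log are setup/common.sh's get_meminfo walking a meminfo file field by field until it reaches the requested key; under xtrace every field produces one test plus one continue, which is why each lookup spans dozens of lines. Below is a condensed sketch assembled from that trace; the standalone wrapper, argument handling and the final return 1 are illustrative assumptions, not the verbatim setup/common.sh helper.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup seen in the xtrace above (get_meminfo in
    # setup/common.sh). Wrapper form and defaults are illustrative only.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip used in the trace

    get_meminfo() {
        local get=$1 node=${2:-}          # e.g. get=HugePages_Surp, node=0
        local var val _ mem
        local mem_f=/proc/meminfo
        # Use the per-node meminfo when it exists; otherwise fall back to /proc/meminfo.
        # This is why the odd_alloc run above probes the nonexistent
        # /sys/devices/system/node/node/meminfo (empty node) and keeps mem_f=/proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        # Walk the fields one by one -- the [[ key == pattern ]] / continue sequence
        # in the trace -- and print the value of the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above

The value echoed on a match (echo 0 / return 0 in the trace) is what hugepages.sh feeds into nodes_test[node] and the surp/resv accounting during verify_nr_hugepages.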
00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.334 08:31:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.592 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.163 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4879068 kB' 'MemAvailable: 9502316 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240024 kB' 'Inactive: 3630052 kB' 'Active(anon): 139356 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100668 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 756 kB' 'Writeback: 0 kB' 'AnonPages: 149024 kB' 'Mapped: 73692 kB' 'Shmem: 2616 kB' 'KReclaimable: 216392 kB' 'Slab: 309240 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 92848 kB' 'KernelStack: 4528 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 648992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.164 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 
08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.165 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4878808 kB' 'MemAvailable: 9502056 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240024 kB' 'Inactive: 3630052 kB' 'Active(anon): 139356 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100668 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 756 kB' 'Writeback: 0 kB' 'AnonPages: 149024 kB' 'Mapped: 73692 kB' 'Shmem: 2616 kB' 'KReclaimable: 216392 kB' 'Slab: 309240 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 92848 kB' 'KernelStack: 4528 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 642944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.165 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 
08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.166 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4879092 kB' 'MemAvailable: 9502340 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1239972 kB' 'Inactive: 3630052 kB' 'Active(anon): 139304 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100668 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 760 kB' 'Writeback: 0 kB' 'AnonPages: 148396 kB' 'Mapped: 73668 kB' 'Shmem: 2616 kB' 'KReclaimable: 216392 kB' 'Slab: 309424 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 93032 kB' 'KernelStack: 4496 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 642944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.167 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.168 nr_hugepages=1025 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:03.168 resv_hugepages=0 00:05:03.168 surplus_hugepages=0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.168 anon_hugepages=0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.168 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4879596 kB' 'MemAvailable: 9502844 kB' 'Buffers: 37756 kB' 'Cached: 4701988 kB' 'SwapCached: 0 kB' 'Active: 1240328 kB' 'Inactive: 3630052 kB' 'Active(anon): 139660 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100668 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 760 kB' 'Writeback: 0 kB' 'AnonPages: 148860 kB' 'Mapped: 73648 kB' 'Shmem: 2616 kB' 'KReclaimable: 216392 kB' 'Slab: 309440 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 93048 kB' 'KernelStack: 4580 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 647872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 
08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.169 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.170 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4879784 kB' 'MemUsed: 7371312 kB' 'Active: 1240068 kB' 'Inactive: 3630052 kB' 'Active(anon): 139400 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100668 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 760 kB' 'Writeback: 0 kB' 'FilePages: 4739744 kB' 'Mapped: 73648 kB' 'AnonPages: 148860 kB' 'Shmem: 2616 kB' 'KernelStack: 4580 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216392 kB' 'Slab: 309440 kB' 'SReclaimable: 216392 kB' 'SUnreclaim: 93048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.170 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:03.171 node0=1025 expecting 1025 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:03.171 00:05:03.171 real 0m0.886s 00:05:03.171 user 0m0.293s 00:05:03.171 sys 0m0.626s 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.171 08:31:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.171 ************************************ 00:05:03.171 END TEST odd_alloc 00:05:03.171 ************************************ 00:05:03.430 08:31:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.430 08:31:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:03.430 08:31:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.430 08:31:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.430 08:31:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.430 ************************************ 00:05:03.430 START TEST custom_alloc 00:05:03.430 ************************************ 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:03.430 08:31:38 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.430 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:03.687 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5942460 kB' 'MemAvailable: 10565712 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227024 kB' 'Inactive: 3630056 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100676 kB' 'Inactive(file): 3628264 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 
'Writeback: 0 kB' 'AnonPages: 135660 kB' 'Mapped: 72684 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309232 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92848 kB' 'KernelStack: 4360 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 618472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.947 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 
08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.948 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5942720 kB' 'MemAvailable: 10565972 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226868 kB' 'Inactive: 3630056 kB' 
'Active(anon): 126192 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100676 kB' 'Inactive(file): 3628264 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 135744 kB' 'Mapped: 72684 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309232 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92848 kB' 'KernelStack: 4344 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 618472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.949 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
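The long run of "continue" lines above is the per-key scan that get_meminfo performs over /proc/meminfo: each line is split on ': ', keys that do not match the requested field are skipped, and the first match is echoed back (0 for HugePages_Surp here, with HugePages_Rsvd looked up next). A condensed, illustrative reconstruction of that pattern, using a hypothetical helper name get_field rather than the real setup/common.sh function, and omitting the per-node "Node N" prefix handling:

    get_field() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo            # one "Key: value [kB]" entry per element
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # skip non-matching keys, as in the trace
            echo "$val"                           # e.g. 0 for HugePages_Rsvd on this host
            return 0
        done
        return 1
    }

    get_field HugePages_Rsvd
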
00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5943192 kB' 'MemAvailable: 10566444 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226896 kB' 'Inactive: 3630056 kB' 'Active(anon): 126220 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100676 kB' 'Inactive(file): 3628264 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 135432 kB' 'Mapped: 72668 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309128 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92744 kB' 'KernelStack: 4364 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 624048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.950 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.951 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.952 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:03.953 nr_hugepages=512 00:05:03.953 08:31:38 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.953 resv_hugepages=0 00:05:03.953 surplus_hugepages=0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.953 anon_hugepages=0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5943128 kB' 'MemAvailable: 10566380 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227156 kB' 'Inactive: 3630056 kB' 'Active(anon): 126480 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100676 kB' 'Inactive(file): 3628264 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 135952 kB' 'Mapped: 72668 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309128 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92744 kB' 'KernelStack: 4364 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 616524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 
08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.953 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
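The values echoed a few entries back (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check at setup/hugepages.sh@107-@110 that precedes this HugePages_Total scan. A hedged sketch of that accounting, reusing the get_meminfo sketch above; the variable names follow the echoed output, not necessarily the real script:

    # Approximate reconstruction of the verification step traced at
    # setup/hugepages.sh@100-@110; details are inferred, not quoted.
    nr_hugepages=512                          # what the custom_alloc test configured
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    anon=$(get_meminfo AnonHugePages)         # 0 kB here, reported for information only
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # The kernel's HugePages_Total (the key the surrounding scan is looking for)
    # must match the request once surplus and reserved pages are added back in.
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"
    (( total == nr_hugepages )) || echo "unexpected hugepage count"

In this run both checks hold (512 == 512 + 0 + 0), so the test goes on to confirm the per-node split on node0.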
00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5943428 kB' 'MemUsed: 6307668 kB' 'Active: 1226888 kB' 'Inactive: 3630056 kB' 'Active(anon): 126212 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100676 kB' 'Inactive(file): 3628264 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 4739756 kB' 'Mapped: 72668 kB' 'AnonPages: 135448 kB' 'Shmem: 2616 kB' 'KernelStack: 4324 kB' 'PageTables: 2888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216384 kB' 'Slab: 309292 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.954 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 
08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.955 node0=512 expecting 512 00:05:03.955 ************************************ 00:05:03.955 END TEST custom_alloc 00:05:03.955 ************************************ 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.955 00:05:03.955 real 0m0.658s 00:05:03.955 user 0m0.255s 00:05:03.955 sys 0m0.436s 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.955 08:31:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.955 08:31:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.955 08:31:39 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:03.955 08:31:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.955 08:31:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.955 08:31:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.955 ************************************ 00:05:03.955 START TEST no_shrink_alloc 00:05:03.955 ************************************ 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.955 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 
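The get_test_nr_hugepages 2097152 0 call traced just above sizes the no_shrink_alloc test: 2,097,152 kB at the 2048 kB hugepage size reported in the meminfo dumps comes to 1024 pages, all assigned to node 0. A rough sketch of that bookkeeping, with the arithmetic and helper bodies inferred from the traced values rather than quoted from the SPDK source:

    # Hedged reconstruction of get_test_nr_hugepages / get_test_nr_hugepages_per_node.
    default_hugepages=2048                    # kB, per "Hugepagesize: 2048 kB" in the dumps
    get_test_nr_hugepages() {
        local size=$1
        local node_ids=()
        if (( $# > 1 )); then                 # an optional list of NUMA node ids follows
            shift
            node_ids=("$@")
        fi
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@")
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=1
        declare -g -a nodes_test=()
        # With explicit node ids, each listed node gets the full allocation,
        # which is why the trace ends with nodes_test[0]=1024 for node 0 only.
        for _no_nodes in "${user_nodes[@]}"; do
            nodes_test[_no_nodes]=$_nr_hugepages
        done
    }

Called as get_test_nr_hugepages 2097152 0, as in the trace, this leaves nr_hugepages=1024 and nodes_test=([0]=1024), which is the HugePages_Total: 1024 that the verify_nr_hugepages pass below reads back from meminfo.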
00:05:03.956 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.213 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4895484 kB' 'MemAvailable: 9518736 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227020 kB' 'Inactive: 3630060 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100672 kB' 'Inactive(file): 3628268 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 136028 kB' 'Mapped: 73272 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309260 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 4316 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 602748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14036 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.826 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.827 08:31:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted]
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
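What this wall of repeated IFS=': ' / read / [[ ... ]] / continue lines records is setup/common.sh's get_meminfo helper running under bash xtrace: the file behind mem_f (/proc/meminfo here) is snapshotted into an array with mapfile, any 'Node <N> ' prefixes are stripped, and the loop then splits each 'Key: value kB' entry on IFS=': ', skipping every key until the requested one matches, at which point the value is echoed (common.sh@33). The backslash soup \A\n\o\n\H\u\g\e\P\a\g\e\s is just how xtrace prints the quoted right-hand side of [[ == ]], so each test is a literal string comparison. A minimal standalone sketch of the same lookup -- get_meminfo_sketch is a hypothetical name, and it streams the file with while/read instead of snapshotting it, covering only the system-wide case:

#!/usr/bin/env bash
# Sketch: look up one key in /proc/meminfo the way the traced helper does.
get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits "MemTotal:   12251096 kB" into
    # var=MemTotal, val=12251096, _=kB.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # callers capture this, e.g. anon=$(get_meminfo_sketch AnonHugePages)
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1024 on the machine in this log

The traced helper instead reads the whole file into an array up front and scans that copy; the streaming loop above is simply the shortest equivalent for a single system-wide lookup.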
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.827 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.828 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4895752 kB' 'MemAvailable: 9519004 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226556 kB' 'Inactive: 3630060 kB' 'Active(anon): 125884 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100672 kB' 'Inactive(file): 3628268 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 135672 kB' 'Mapped: 73224 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309260 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 4268 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 614308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14052 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the same cycle skips every key of the dump above, MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
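One line in the setup above deserves a gloss: common.sh@29's mem=("${mem[@]#Node +([0-9]) }"). Per-node meminfo files under /sys/devices/system/node/ prefix every line with 'Node <N> ', and that extglob parameter expansion strips the prefix from every array element at once, so the matching loop can treat node-local and system-wide files identically. A small demonstration of the idiom, with invented sample values:

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern requires extended globbing

# Two lines shaped like /sys/devices/system/node/node0/meminfo content.
mem=('Node 0 MemTotal: 12251096 kB' 'Node 0 HugePages_Total: 1024')

# Strip the shortest leading match of 'Node <digits> ' from each element,
# the same expansion the trace shows at common.sh@29.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# MemTotal: 12251096 kB
# HugePages_Total: 1024

Lines from plain /proc/meminfo never start with 'Node ', so the expansion leaves them untouched and the helper can apply it unconditionally.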
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.829 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896028 kB' 'MemAvailable: 9519280 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226756 kB' 'Inactive: 3630060 kB' 'Active(anon): 126084 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100672 kB' 'Inactive(file): 3628268 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 135208 kB' 'Mapped: 73176 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309260 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 4240 kB' 'PageTables: 3288 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 614308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the same cycle skips every key of the dump above, MemTotal through HugePages_Free, until HugePages_Rsvd matches]
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:04.831 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:04.831 anon_hugepages=0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
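At this point the test has read back anon=0, surp=0 and resv=0 against a pool of 1024 hugepages, and the two arithmetic checks at hugepages.sh@107 and @109 both expand true: 1024 equals nr_hugepages plus the zero surplus and reserved counts, and 1024 equals nr_hugepages alone -- i.e. the allocation did not shrink and nothing is left surplus or reserved. The same assertion can be rebuilt directly from /proc/meminfo; this is a sketch assuming a requested pool of 1024 pages, not the test's actual code:

#!/usr/bin/env bash
# Re-check the no_shrink_alloc expectation straight from /proc/meminfo.
nr_hugepages=1024   # pool size this test configured (see nr_hugepages=1024 above)

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if ((total == nr_hugepages + surp + resv)) && ((total == nr_hugepages)); then
    echo "hugepage pool as expected: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi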
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.831 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896304 kB' 'MemAvailable: 9519556 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226984 kB' 'Inactive: 3630060 kB' 'Active(anon): 126312 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100672 kB' 'Inactive(file): 3628268 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 135676 kB' 'Mapped: 73176 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309260 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92876 kB' 'KernelStack: 4276 kB' 'PageTables: 3248 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 614300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: the same cycle skips every key of the dump above, MemTotal through ShmemPmdMapped]
00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.832 08:31:39
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.832 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.833 08:31:39 
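The trace above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key until the requested field (here HugePages_Total) matches, then echoing its value. A minimal standalone sketch of that parsing pattern, reconstructed from the xtrace — the real helper pipes a captured snapshot through printf/mapfile and differs in detail:

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {    # usage: get_meminfo <key> [node]
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live under sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # sysfs lines carry a "Node N " prefix; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"   # bare value, e.g. 1024; a trailing "kB" lands in $_
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # global lookup, as in the call traced above
get_meminfo HugePages_Surp 0     # node-0 lookup, as in the call that follows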
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.833 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896516 kB' 'MemUsed: 7354580 kB' 'Active: 1227044 kB' 'Inactive: 3630060 kB' 'Active(anon): 126372 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100672 kB' 'Inactive(file): 3628268 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 4739756 kB' 'Mapped: 73176 kB' 'AnonPages: 135348 kB' 'Shmem: 2616 kB' 'KernelStack: 4328 kB' 'PageTables: 3220 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216384 kB' 'Slab: 309260 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 continues past every node0 key in the snapshot just printed, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.834 08:31:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:05.355 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:05.355 INFO: Requested 512 hugepages but 1024 already allocated on node0
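With the pool verified, hugepages.sh@202 re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no; per the INFO line, the existing 1024-page pool already covers the request, so nothing is reallocated. The per-node bookkeeping traced at hugepages.sh@29-130 reduces to the sketch below. It reuses the get_meminfo sketch above; the scaffolding (the expected variable, the HugePages_Total lookup) is illustrative, not the verbatim script:

shopt -s extglob
declare -a nodes_sys nodes_test
expected=1024   # nr_hugepages configured earlier in the run (assumed here)

# Enumerate NUMA nodes the same way the trace does.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$expected
done

# Fold per-node surplus pages into the observed count and report each node.
for node in "${!nodes_sys[@]}"; do
    nodes_test[node]=$(get_meminfo HugePages_Total "$node")
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"   # cf. "node0=1024 expecting 1024"
done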
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.355 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4895884 kB' 'MemAvailable: 9519136 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227708 kB' 'Inactive: 3630052 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100680 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 136328 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309100 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92716 kB' 'KernelStack: 4432 kB' 'PageTables: 3092 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 621132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 continues past every /proc/meminfo key from MemTotal through HardwareCorrupted that is not AnonHugePages]
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.356 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896144 kB' 'MemAvailable: 9519396 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227708 kB' 'Inactive: 3630052 kB' 'Active(anon): 127028 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100680 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 136588 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309100 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92716 kB' 'KernelStack: 4432 kB' 'PageTables: 3092 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 621132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 was still scanning for HugePages_Surp — MemTotal through FilePmdMapped had not yet matched — when the captured log cuts off mid-record at setup/common.sh@31 '# read -r']
var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896404 kB' 'MemAvailable: 9519656 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1227448 kB' 'Inactive: 3630052 kB' 'Active(anon): 126768 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100680 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 136332 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309100 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92716 kB' 'KernelStack: 4432 kB' 'PageTables: 3092 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 621132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.358 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.359 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0
00:05:05.360 nr_hugepages=1024
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:05.360 resv_hugepages=0
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:05.360 surplus_hugepages=0
00:05:05.360 anon_hugepages=0
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896444 kB' 'MemAvailable: 9519696 kB' 'Buffers: 37764 kB' 'Cached: 4701992 kB' 'SwapCached: 0 kB' 'Active: 1226848 kB' 'Inactive: 3630052 kB' 'Active(anon): 126168 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100680 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 135660 kB' 'Mapped: 72784 kB' 'Shmem: 2616 kB' 'KReclaimable: 216384 kB' 'Slab: 309100 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92716 kB' 'KernelStack: 4456 kB' 'PageTables: 2948 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 614436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8688 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.360 08:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.360 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.361 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4896152 kB' 'MemUsed: 7354944 kB' 'Active: 1227028 kB' 'Inactive: 3630052 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1100680 kB' 'Inactive(file): 3628260 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 332 kB' 'Writeback: 0 
kB' 'FilePages: 4739756 kB' 'Mapped: 72688 kB' 'AnonPages: 135304 kB' 'Shmem: 2616 kB' 'KernelStack: 4392 kB' 'PageTables: 2852 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 216384 kB' 'Slab: 309136 kB' 'SReclaimable: 216384 kB' 'SUnreclaim: 92752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 
08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.362 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:05.363 node0=1024 expecting 1024 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:05.363 00:05:05.363 real 0m1.311s 00:05:05.363 user 0m0.490s 00:05:05.363 sys 0m0.894s 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.363 ************************************ 00:05:05.363 08:31:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 END TEST no_shrink_alloc 00:05:05.363 ************************************ 00:05:05.363 08:31:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:05.363 08:31:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:05.363 00:05:05.363 real 0m5.931s 00:05:05.363 user 0m1.998s 00:05:05.363 sys 0m4.091s 00:05:05.363 08:31:40 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.363 ************************************ 00:05:05.363 END TEST hugepages 00:05:05.363 ************************************ 00:05:05.363 08:31:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 08:31:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.363 08:31:40 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:05.363 08:31:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.363 08:31:40 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.363 08:31:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.363 ************************************ 00:05:05.363 START TEST driver 00:05:05.363 ************************************ 00:05:05.363 08:31:40 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:05.621 * Looking for test storage... 00:05:05.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.621 08:31:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:05.621 08:31:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.621 08:31:40 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.879 08:31:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:05.879 08:31:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.879 08:31:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.879 08:31:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.879 ************************************ 00:05:05.879 START TEST guess_driver 00:05:05.879 ************************************ 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:05:05.879 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:05.879 Looking for driver=uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=uio_pci_generic 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.879 08:31:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:06.446 08:31:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.821 08:31:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:07.821 08:31:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:07.821 08:31:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.821 08:31:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.386 ************************************ 00:05:08.387 END TEST guess_driver 00:05:08.387 ************************************ 00:05:08.387 00:05:08.387 real 0m2.280s 00:05:08.387 user 0m0.449s 00:05:08.387 sys 0m1.787s 00:05:08.387 08:31:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.387 08:31:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 08:31:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:08.387 00:05:08.387 real 0m2.820s 00:05:08.387 user 0m0.714s 00:05:08.387 sys 0m2.074s 00:05:08.387 08:31:43 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.387 ************************************ 00:05:08.387 END TEST driver 00:05:08.387 ************************************ 00:05:08.387 08:31:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 08:31:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:08.387 08:31:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:08.387 08:31:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.387 08:31:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.387 08:31:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 ************************************ 00:05:08.387 START TEST devices 00:05:08.387 ************************************ 00:05:08.387 08:31:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:08.387 * Looking for test storage... 
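The long HugePages scan traced earlier (the run of "continue" lines before the no_shrink_alloc summary) is setup/common.sh's get_meminfo helper: it reads either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, splits each line on ': ', and echoes the value once the requested key (HugePages_Total, HugePages_Surp, ...) matches. A minimal stand-alone sketch of that lookup, assuming the same file layout; this is an illustration, not the exact SPDK helper, which mapfiles the file and steps through it field by field as the trace shows:

get_meminfo() {                                  # get_meminfo <key> [node]
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # prefer the per-node view when a node id is given and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node [0-9]* }                # drop the "Node N " prefix of per-node files
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                          # e.g. 1024 for HugePages_Total in this run
            return 0
        fi
    done < "$mem_f"
    return 1
}

nr=$(get_meminfo HugePages_Total 0)              # would print 1024 on the node0 state shown above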
00:05:08.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.387 08:31:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:08.387 08:31:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:08.387 08:31:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.387 08:31:43 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:08.952 08:31:43 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:08.952 No valid GPT data, bailing 00:05:08.952 08:31:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:08.952 08:31:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:08.952 08:31:43 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:08.952 08:31:43 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:08.952 08:31:43 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.952 08:31:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.952 ************************************ 00:05:08.952 START TEST nvme_mount 00:05:08.952 ************************************ 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.952 08:31:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.953 08:31:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:09.884 Creating new GPT entries in memory. 00:05:09.884 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.884 other utilities. 00:05:09.884 08:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.884 08:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.884 08:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:09.884 08:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.884 08:31:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:11.255 Creating new GPT entries in memory. 
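The two sgdisk calls traced above are what setup/common.sh's partition_drive does for the nvme_mount test: wipe the existing GPT, then create one partition whose end sector follows from size=1073741824 divided by 4096 (262144 units), which yields --new=1:2048:264191. A rough stand-alone equivalent, assuming /dev/nvme0n1 as in the log; a sketch only, since the real helper also waits for the matching udev events through sync_dev_uevents.sh:

disk=/dev/nvme0n1
size=1073741824                          # value set in setup/common.sh
((size /= 4096))                         # 262144 units per partition
part_start=2048
part_end=$((part_start + size - 1))      # 264191, matching the trace
sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}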
00:05:11.255 The operation has completed successfully. 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 103619 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:11.255 08:31:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.665 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.665 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:12.665 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:12.665 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.665 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.665 08:31:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
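Just before this point the test re-formats the whole namespace and mounts it (setup/common.sh's mkfs path): make the mount point, run mkfs.ext4 -qF with the optional size argument, then mount. Condensed from the trace, with the dummy-file step added for illustration:

dev=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
mkdir -p "$mnt"
mkfs.ext4 -qF "$dev" 1024M               # quiet, force; 1024M is the size passed in the log
mount "$dev" "$mnt"
: > "$mnt/test_nvme"                     # dummy file the verify step checks with [[ -e ... ]]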
00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.041 08:31:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.041 08:31:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.420 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.420 00:05:15.420 real 0m6.306s 00:05:15.420 user 0m0.692s 00:05:15.420 sys 0m3.504s 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.420 ************************************ 00:05:15.420 END TEST nvme_mount 00:05:15.420 ************************************ 00:05:15.420 08:31:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:15.420 08:31:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:15.420 08:31:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.420 08:31:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.420 08:31:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.420 08:31:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.420 ************************************ 00:05:15.420 START TEST dm_mount 00:05:15.420 
************************************ 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.420 08:31:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:16.353 Creating new GPT entries in memory. 00:05:16.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.353 other utilities. 00:05:16.353 08:31:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.353 08:31:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.353 08:31:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.353 08:31:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.353 08:31:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:17.286 Creating new GPT entries in memory. 00:05:17.286 The operation has completed successfully. 00:05:17.286 08:31:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.286 08:31:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.286 08:31:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:17.286 08:31:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.286 08:31:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:18.657 The operation has completed successfully. 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 104100 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:18.657 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:18.658 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.916 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:18.916 08:31:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.849 
08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.849 08:31:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:20.106 08:31:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:21.489 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:21.489 ************************************ 00:05:21.489 END TEST dm_mount 00:05:21.489 ************************************ 00:05:21.489 00:05:21.489 real 0m6.024s 00:05:21.489 user 0m0.466s 00:05:21.489 sys 0m2.305s 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.489 08:31:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:21.489 08:31:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:21.489 08:31:56 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.489 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.489 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.489 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.489 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.489 08:31:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:21.489 ************************************ 00:05:21.489 END TEST devices 00:05:21.489 ************************************ 00:05:21.489 00:05:21.489 real 0m13.137s 00:05:21.489 user 0m1.566s 00:05:21.489 sys 0m6.139s 00:05:21.489 08:31:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.489 08:31:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.489 08:31:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:21.489 ************************************ 00:05:21.489 END TEST setup.sh 00:05:21.489 ************************************ 00:05:21.489 00:05:21.489 real 0m27.238s 00:05:21.489 user 0m6.058s 00:05:21.489 sys 0m15.938s 00:05:21.489 08:31:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.489 08:31:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.489 08:31:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.489 08:31:56 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:21.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:21.747 Hugepages 00:05:21.747 node hugesize free / total 00:05:21.747 node0 1048576kB 0 / 0 00:05:21.747 node0 2048kB 2048 / 2048 00:05:21.747 00:05:21.747 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.005 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:22.005 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:22.005 08:31:57 -- spdk/autotest.sh@130 -- # uname -s 00:05:22.005 08:31:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:22.005 08:31:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:22.005 08:31:57 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:22.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.507 08:31:58 -- common/autotest_common.sh@1532 -- # sleep 1 
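The "Hugepages / node hugesize free / total" table printed by scripts/setup.sh status above comes straight from sysfs; per-node lines such as "node0 2048kB 2048 / 2048" can be reproduced with a loop like the following, assuming the standard sysfs layout (sketch, not the script's own formatting code):

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}                  # e.g. 2048kB or 1048576kB
        printf '%s %s %s / %s\n' "${node##*/}" "$size" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
done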
00:05:24.880 08:31:59 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:24.880 08:31:59 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:24.880 08:31:59 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.880 08:31:59 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:24.880 08:31:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.880 08:31:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.880 08:31:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.880 08:31:59 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.880 08:31:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.880 08:31:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.880 08:31:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:24.880 08:31:59 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:24.880 Waiting for block devices as requested 00:05:24.880 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.139 08:32:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:25.139 08:32:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.139 08:32:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:25.139 08:32:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:25.139 08:32:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:25.139 08:32:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:25.139 08:32:00 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:25.139 08:32:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:25.139 08:32:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:25.139 08:32:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:25.139 08:32:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:25.139 08:32:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:25.139 08:32:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:25.139 08:32:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:25.139 08:32:00 -- common/autotest_common.sh@1557 -- # continue 00:05:25.139 08:32:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:25.139 08:32:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.139 08:32:00 -- common/autotest_common.sh@10 -- # set +x 00:05:25.139 08:32:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:25.139 08:32:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.139 08:32:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.139 08:32:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:25.655 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.587 08:32:01 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:26.587 08:32:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.587 08:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:26.587 08:32:01 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:26.587 08:32:01 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:26.587 08:32:01 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.587 08:32:01 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:26.587 08:32:01 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:26.587 08:32:01 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:26.587 08:32:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:26.587 08:32:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:26.587 08:32:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.587 08:32:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.587 08:32:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:26.845 08:32:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:26.845 08:32:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:26.845 08:32:01 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:26.845 08:32:01 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:26.845 08:32:01 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:26.845 08:32:01 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.845 08:32:01 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:26.845 08:32:01 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:26.845 08:32:01 -- common/autotest_common.sh@1593 -- # return 0 00:05:26.845 08:32:01 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:26.845 08:32:01 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:26.845 08:32:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.845 08:32:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.845 08:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:26.845 ************************************ 00:05:26.845 START TEST unittest 00:05:26.845 ************************************ 00:05:26.845 08:32:01 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:26.845 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:26.845 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:26.845 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:26.846 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:26.846 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
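
For context on the device-ID comparison in opal_revert_cleanup earlier in this block: 0x0010 with vendor 1b36 is QEMU's emulated NVMe controller, so the match against 0x0a54 (an Intel datacenter NVMe part) fails and the Opal revert is skipped. The same sysfs attribute can be listed for every controller with a short loop; a sketch assuming the standard Linux sysfs layout, not an SPDK helper:

    # print each NVMe controller's PCI address (BDF) and PCI device ID
    for ctrl in /sys/class/nvme/nvme*; do
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        printf '%s -> %s\n' "$bdf" "$(cat "$ctrl/device/device")"
    done
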
00:05:26.846 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:26.846 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:26.846 ++ rpc_py=rpc_cmd 00:05:26.846 ++ set -e 00:05:26.846 ++ shopt -s nullglob 00:05:26.846 ++ shopt -s extglob 00:05:26.846 ++ shopt -s inherit_errexit 00:05:26.846 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:26.846 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:26.846 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:26.846 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:26.846 +++ CONFIG_FIO_PLUGIN=y 00:05:26.846 +++ CONFIG_NVME_CUSE=y 00:05:26.846 +++ CONFIG_RAID5F=y 00:05:26.846 +++ CONFIG_LTO=n 00:05:26.846 +++ CONFIG_SMA=n 00:05:26.846 +++ CONFIG_ISAL=y 00:05:26.846 +++ CONFIG_OPENSSL_PATH= 00:05:26.846 +++ CONFIG_IDXD_KERNEL=n 00:05:26.846 +++ CONFIG_URING_PATH= 00:05:26.846 +++ CONFIG_DAOS=n 00:05:26.846 +++ CONFIG_DPDK_LIB_DIR= 00:05:26.846 +++ CONFIG_OCF=n 00:05:26.846 +++ CONFIG_EXAMPLES=y 00:05:26.846 +++ CONFIG_RDMA_PROV=verbs 00:05:26.846 +++ CONFIG_ISCSI_INITIATOR=y 00:05:26.846 +++ CONFIG_VTUNE=n 00:05:26.846 +++ CONFIG_DPDK_INC_DIR= 00:05:26.846 +++ CONFIG_CET=n 00:05:26.846 +++ CONFIG_TESTS=y 00:05:26.846 +++ CONFIG_APPS=y 00:05:26.846 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:26.846 +++ CONFIG_DAOS_DIR= 00:05:26.846 +++ CONFIG_CRYPTO_MLX5=n 00:05:26.846 +++ CONFIG_XNVME=n 00:05:26.846 +++ CONFIG_UNIT_TESTS=y 00:05:26.846 +++ CONFIG_FUSE=n 00:05:26.846 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:26.846 +++ CONFIG_OCF_PATH= 00:05:26.846 +++ CONFIG_WPDK_DIR= 00:05:26.846 +++ CONFIG_VFIO_USER=n 00:05:26.846 +++ CONFIG_MAX_LCORES=128 00:05:26.846 +++ CONFIG_ARCH=native 00:05:26.846 +++ CONFIG_TSAN=n 00:05:26.846 +++ CONFIG_VIRTIO=y 00:05:26.846 +++ CONFIG_HAVE_EVP_MAC=n 00:05:26.846 +++ CONFIG_IPSEC_MB=n 00:05:26.846 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:26.846 +++ CONFIG_DPDK_UADK=n 00:05:26.846 +++ CONFIG_ASAN=y 00:05:26.846 +++ CONFIG_SHARED=n 00:05:26.846 +++ CONFIG_VTUNE_DIR= 00:05:26.846 +++ CONFIG_RDMA_SET_TOS=y 00:05:26.846 +++ CONFIG_VBDEV_COMPRESS=n 00:05:26.846 +++ CONFIG_VFIO_USER_DIR= 00:05:26.846 +++ CONFIG_PGO_DIR= 00:05:26.846 +++ CONFIG_FUZZER_LIB= 00:05:26.846 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:26.846 +++ CONFIG_USDT=n 00:05:26.846 +++ CONFIG_HAVE_KEYUTILS=y 00:05:26.846 +++ CONFIG_URING_ZNS=n 00:05:26.846 +++ CONFIG_FC_PATH= 00:05:26.846 +++ CONFIG_COVERAGE=y 00:05:26.846 +++ CONFIG_CUSTOMOCF=n 00:05:26.846 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:26.846 +++ CONFIG_WERROR=y 00:05:26.846 +++ CONFIG_DEBUG=y 00:05:26.846 +++ CONFIG_RDMA=y 00:05:26.846 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:26.846 +++ CONFIG_FUZZER=n 00:05:26.846 +++ CONFIG_FC=n 00:05:26.846 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:26.846 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:26.846 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:26.846 +++ CONFIG_CROSS_PREFIX= 00:05:26.846 +++ CONFIG_PREFIX=/usr/local 00:05:26.846 +++ CONFIG_HAVE_LIBBSD=n 00:05:26.846 +++ CONFIG_UBSAN=y 00:05:26.846 +++ CONFIG_PGO_CAPTURE=n 00:05:26.846 +++ CONFIG_UBLK=n 00:05:26.846 +++ CONFIG_ISAL_CRYPTO=y 00:05:26.846 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:26.846 +++ CONFIG_CRYPTO=n 00:05:26.846 +++ CONFIG_RBD=n 00:05:26.846 +++ CONFIG_LIBDIR= 00:05:26.846 +++ CONFIG_IPSEC_MB_DIR= 00:05:26.846 +++ CONFIG_PGO_USE=n 00:05:26.846 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:26.846 +++ CONFIG_GOLANG=n 00:05:26.846 +++ CONFIG_VHOST=y 00:05:26.846 +++ CONFIG_IDXD=y 00:05:26.846 +++ CONFIG_AVAHI=n 00:05:26.846 
+++ CONFIG_URING=n 00:05:26.846 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:26.846 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:26.846 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:26.846 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:26.846 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:26.846 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:26.846 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:26.846 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:26.846 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:26.846 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:26.846 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:26.846 +++ VHOST_APP=("$_app_dir/vhost") 00:05:26.846 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:26.846 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:26.846 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:26.846 +++ [[ #ifndef SPDK_CONFIG_H 00:05:26.846 #define SPDK_CONFIG_H 00:05:26.846 #define SPDK_CONFIG_APPS 1 00:05:26.846 #define SPDK_CONFIG_ARCH native 00:05:26.846 #define SPDK_CONFIG_ASAN 1 00:05:26.846 #undef SPDK_CONFIG_AVAHI 00:05:26.846 #undef SPDK_CONFIG_CET 00:05:26.846 #define SPDK_CONFIG_COVERAGE 1 00:05:26.846 #define SPDK_CONFIG_CROSS_PREFIX 00:05:26.846 #undef SPDK_CONFIG_CRYPTO 00:05:26.846 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:26.846 #undef SPDK_CONFIG_CUSTOMOCF 00:05:26.846 #undef SPDK_CONFIG_DAOS 00:05:26.846 #define SPDK_CONFIG_DAOS_DIR 00:05:26.846 #define SPDK_CONFIG_DEBUG 1 00:05:26.846 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:26.846 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:26.846 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:26.846 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:26.846 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:26.846 #undef SPDK_CONFIG_DPDK_UADK 00:05:26.846 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:26.846 #define SPDK_CONFIG_EXAMPLES 1 00:05:26.846 #undef SPDK_CONFIG_FC 00:05:26.846 #define SPDK_CONFIG_FC_PATH 00:05:26.846 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:26.846 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:26.846 #undef SPDK_CONFIG_FUSE 00:05:26.846 #undef SPDK_CONFIG_FUZZER 00:05:26.846 #define SPDK_CONFIG_FUZZER_LIB 00:05:26.846 #undef SPDK_CONFIG_GOLANG 00:05:26.846 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:26.846 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:05:26.846 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:26.846 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:26.846 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:26.846 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:26.846 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:26.846 #define SPDK_CONFIG_IDXD 1 00:05:26.846 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:26.846 #undef SPDK_CONFIG_IPSEC_MB 00:05:26.846 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:26.846 #define SPDK_CONFIG_ISAL 1 00:05:26.846 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:26.846 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:26.846 #define SPDK_CONFIG_LIBDIR 00:05:26.846 #undef SPDK_CONFIG_LTO 00:05:26.846 #define SPDK_CONFIG_MAX_LCORES 128 00:05:26.846 #define SPDK_CONFIG_NVME_CUSE 1 00:05:26.846 #undef SPDK_CONFIG_OCF 00:05:26.846 #define SPDK_CONFIG_OCF_PATH 00:05:26.846 #define SPDK_CONFIG_OPENSSL_PATH 00:05:26.846 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:26.846 #define SPDK_CONFIG_PGO_DIR 00:05:26.846 #undef SPDK_CONFIG_PGO_USE 00:05:26.846 #define SPDK_CONFIG_PREFIX /usr/local 00:05:26.846 #define SPDK_CONFIG_RAID5F 1 00:05:26.846 #undef 
SPDK_CONFIG_RBD 00:05:26.846 #define SPDK_CONFIG_RDMA 1 00:05:26.846 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:26.846 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:26.846 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:26.846 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:26.846 #undef SPDK_CONFIG_SHARED 00:05:26.846 #undef SPDK_CONFIG_SMA 00:05:26.846 #define SPDK_CONFIG_TESTS 1 00:05:26.846 #undef SPDK_CONFIG_TSAN 00:05:26.846 #undef SPDK_CONFIG_UBLK 00:05:26.846 #define SPDK_CONFIG_UBSAN 1 00:05:26.846 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:26.846 #undef SPDK_CONFIG_URING 00:05:26.846 #define SPDK_CONFIG_URING_PATH 00:05:26.846 #undef SPDK_CONFIG_URING_ZNS 00:05:26.846 #undef SPDK_CONFIG_USDT 00:05:26.846 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:26.846 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:26.846 #undef SPDK_CONFIG_VFIO_USER 00:05:26.846 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:26.846 #define SPDK_CONFIG_VHOST 1 00:05:26.846 #define SPDK_CONFIG_VIRTIO 1 00:05:26.846 #undef SPDK_CONFIG_VTUNE 00:05:26.846 #define SPDK_CONFIG_VTUNE_DIR 00:05:26.846 #define SPDK_CONFIG_WERROR 1 00:05:26.846 #define SPDK_CONFIG_WPDK_DIR 00:05:26.846 #undef SPDK_CONFIG_XNVME 00:05:26.846 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:26.846 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:26.846 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.846 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:26.846 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.846 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.846 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:26.846 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:26.846 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:26.846 ++++ export PATH 00:05:26.846 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:26.846 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:26.846 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:26.846 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:26.846 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:26.846 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:26.846 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:26.846 +++ TEST_TAG=N/A 00:05:26.846 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:26.846 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 
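
A word on the heavily backslash-escaped patterns throughout this trace (the *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* match just above, and the Active-devices match in the dm_mount test earlier): this is simply how bash's xtrace renders a quoted pattern on the right-hand side of [[ == ]], escaping every character so it is matched literally. A two-line reproduction:

    set -x
    cfg='#define SPDK_CONFIG_DEBUG 1'
    [[ $cfg == *"#define SPDK_CONFIG_DEBUG"* ]] && echo debug build    # xtrace prints the RHS as *\#\d\e\f\i\n\e...
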
00:05:26.846 ++++ uname -s 00:05:26.846 +++ PM_OS=Linux 00:05:26.846 +++ MONITOR_RESOURCES_SUDO=() 00:05:26.846 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:26.846 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:26.847 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:26.847 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:26.847 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:26.847 +++ SUDO[0]= 00:05:26.847 +++ SUDO[1]='sudo -E' 00:05:26.847 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:26.847 +++ [[ Linux == FreeBSD ]] 00:05:26.847 +++ [[ Linux == Linux ]] 00:05:26.847 +++ [[ QEMU != QEMU ]] 00:05:26.847 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:26.847 ++ : 0 00:05:26.847 ++ export RUN_NIGHTLY 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_RUN_VALGRIND 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_TEST_UNITTEST 00:05:26.847 ++ : 00:05:26.847 ++ export SPDK_TEST_AUTOBUILD 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_RELEASE_BUILD 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_ISAL 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_ISCSI 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_TEST_NVME 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVME_PMR 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVME_BP 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVME_CLI 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVME_CUSE 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVME_FDP 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVMF 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VFIOUSER 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_FUZZER 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_FUZZER_SHORT 00:05:26.847 ++ : rdma 00:05:26.847 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_RBD 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VHOST 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_TEST_BLOCKDEV 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_IOAT 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_BLOBFS 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VHOST_INIT 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_LVOL 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_RUN_ASAN 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_RUN_UBSAN 00:05:26.847 ++ : 00:05:26.847 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_RUN_NON_ROOT 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_CRYPTO 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_FTL 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_OCF 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_VMD 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_OPAL 00:05:26.847 ++ : 00:05:26.847 ++ export SPDK_TEST_NATIVE_DPDK 00:05:26.847 ++ : true 00:05:26.847 ++ export SPDK_AUTOTEST_X 00:05:26.847 ++ : 1 00:05:26.847 ++ export SPDK_TEST_RAID5 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_URING 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_USDT 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_USE_IGB_UIO 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_SCHEDULER 00:05:26.847 ++ : 0 
00:05:26.847 ++ export SPDK_TEST_SCANBUILD 00:05:26.847 ++ : 00:05:26.847 ++ export SPDK_TEST_NVMF_NICS 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_SMA 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_DAOS 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_XNVME 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_ACCEL_DSA 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_ACCEL_IAA 00:05:26.847 ++ : 00:05:26.847 ++ export SPDK_TEST_FUZZER_TARGET 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_TEST_NVMF_MDNS 00:05:26.847 ++ : 0 00:05:26.847 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:26.847 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:26.847 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:26.847 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:26.847 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:26.847 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:26.847 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:26.847 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:26.847 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:26.847 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:26.847 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:26.847 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:26.847 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:26.847 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:26.847 ++ PYTHONDONTWRITEBYTECODE=1 00:05:26.847 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:26.847 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:26.847 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:26.847 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:26.847 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:26.847 ++ rm -rf /var/tmp/asan_suppression_file 00:05:26.847 ++ cat 00:05:26.847 ++ echo leak:libfuse3.so 00:05:26.847 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:26.847 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:26.847 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:26.847 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:26.847 ++ '[' -z /var/spdk/dependencies ']' 00:05:26.847 ++ export DEPENDENCY_DIR 00:05:26.847 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:26.847 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:26.847 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:26.847 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:26.847 ++ export QEMU_BIN= 
00:05:26.847 ++ QEMU_BIN= 00:05:26.847 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:26.847 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:26.847 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:26.847 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:26.847 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:26.847 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:26.847 ++ '[' 0 -eq 0 ']' 00:05:26.847 ++ export valgrind= 00:05:26.847 ++ valgrind= 00:05:26.847 +++ uname -s 00:05:26.847 ++ '[' Linux = Linux ']' 00:05:26.847 ++ HUGEMEM=4096 00:05:26.847 ++ export CLEAR_HUGE=yes 00:05:26.847 ++ CLEAR_HUGE=yes 00:05:26.847 ++ [[ 0 -eq 1 ]] 00:05:26.847 ++ [[ 0 -eq 1 ]] 00:05:26.847 ++ MAKE=make 00:05:26.847 +++ nproc 00:05:26.847 ++ MAKEFLAGS=-j10 00:05:26.847 ++ export HUGEMEM=4096 00:05:26.847 ++ HUGEMEM=4096 00:05:26.847 ++ NO_HUGE=() 00:05:26.847 ++ TEST_MODE= 00:05:26.847 ++ [[ -z '' ]] 00:05:26.847 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:26.847 ++ exec 00:05:26.847 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:26.847 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:26.847 ++ set_test_storage 2147483648 00:05:26.847 ++ [[ -v testdir ]] 00:05:26.847 ++ local requested_size=2147483648 00:05:26.847 ++ local mount target_dir 00:05:26.847 ++ local -A mounts fss sizes avails uses 00:05:26.847 ++ local source fs size avail mount use 00:05:26.847 ++ local storage_fallback storage_candidates 00:05:26.847 +++ mktemp -udt spdk.XXXXXX 00:05:26.847 ++ storage_fallback=/tmp/spdk.BhRDDT 00:05:26.847 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:26.847 ++ [[ -n '' ]] 00:05:26.847 ++ [[ -n '' ]] 00:05:26.847 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.BhRDDT/tests/unit /tmp/spdk.BhRDDT 00:05:26.847 ++ requested_size=2214592512 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 +++ df -T 00:05:26.847 +++ grep -v Filesystem 00:05:26.847 ++ mounts["$mount"]=udev 00:05:26.847 ++ fss["$mount"]=devtmpfs 00:05:26.847 ++ avails["$mount"]=6224461824 00:05:26.847 ++ sizes["$mount"]=6224461824 00:05:26.847 ++ uses["$mount"]=0 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=tmpfs 00:05:26.847 ++ fss["$mount"]=tmpfs 00:05:26.847 ++ avails["$mount"]=1253408768 00:05:26.847 ++ sizes["$mount"]=1254514688 00:05:26.847 ++ uses["$mount"]=1105920 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=/dev/vda1 00:05:26.847 ++ fss["$mount"]=ext4 00:05:26.847 ++ avails["$mount"]=10433949696 00:05:26.847 ++ sizes["$mount"]=20616794112 00:05:26.847 ++ uses["$mount"]=10166067200 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=tmpfs 00:05:26.847 ++ fss["$mount"]=tmpfs 00:05:26.847 ++ avails["$mount"]=6272561152 00:05:26.847 ++ sizes["$mount"]=6272561152 00:05:26.847 ++ uses["$mount"]=0 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=tmpfs 00:05:26.847 ++ fss["$mount"]=tmpfs 00:05:26.847 ++ avails["$mount"]=5242880 00:05:26.847 ++ sizes["$mount"]=5242880 00:05:26.847 ++ uses["$mount"]=0 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=tmpfs 00:05:26.847 ++ 
fss["$mount"]=tmpfs 00:05:26.847 ++ avails["$mount"]=6272561152 00:05:26.847 ++ sizes["$mount"]=6272561152 00:05:26.847 ++ uses["$mount"]=0 00:05:26.847 ++ read -r source fs size use avail _ mount 00:05:26.847 ++ mounts["$mount"]=/dev/loop0 00:05:26.848 ++ fss["$mount"]=squashfs 00:05:26.848 ++ avails["$mount"]=0 00:05:26.848 ++ sizes["$mount"]=67108864 00:05:26.848 ++ uses["$mount"]=67108864 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ mounts["$mount"]=/dev/vda15 00:05:26.848 ++ fss["$mount"]=vfat 00:05:26.848 ++ avails["$mount"]=103089152 00:05:26.848 ++ sizes["$mount"]=109422592 00:05:26.848 ++ uses["$mount"]=6334464 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ mounts["$mount"]=/dev/loop2 00:05:26.848 ++ fss["$mount"]=squashfs 00:05:26.848 ++ avails["$mount"]=0 00:05:26.848 ++ sizes["$mount"]=41025536 00:05:26.848 ++ uses["$mount"]=41025536 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ mounts["$mount"]=/dev/loop1 00:05:26.848 ++ fss["$mount"]=squashfs 00:05:26.848 ++ avails["$mount"]=0 00:05:26.848 ++ sizes["$mount"]=96337920 00:05:26.848 ++ uses["$mount"]=96337920 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ mounts["$mount"]=tmpfs 00:05:26.848 ++ fss["$mount"]=tmpfs 00:05:26.848 ++ avails["$mount"]=1254510592 00:05:26.848 ++ sizes["$mount"]=1254510592 00:05:26.848 ++ uses["$mount"]=0 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:05:26.848 ++ fss["$mount"]=fuse.sshfs 00:05:26.848 ++ avails["$mount"]=94900039680 00:05:26.848 ++ sizes["$mount"]=105088212992 00:05:26.848 ++ uses["$mount"]=4802740224 00:05:26.848 ++ read -r source fs size use avail _ mount 00:05:26.848 ++ printf '* Looking for test storage...\n' 00:05:26.848 * Looking for test storage... 
00:05:26.848 ++ local target_space new_size 00:05:26.848 ++ for target_dir in "${storage_candidates[@]}" 00:05:26.848 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:26.848 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:26.848 ++ mount=/ 00:05:26.848 ++ target_space=10433949696 00:05:26.848 ++ (( target_space == 0 || target_space < requested_size )) 00:05:26.848 ++ (( target_space >= requested_size )) 00:05:26.848 ++ [[ ext4 == tmpfs ]] 00:05:26.848 ++ [[ ext4 == ramfs ]] 00:05:26.848 ++ [[ / == / ]] 00:05:26.848 ++ new_size=12380659712 00:05:26.848 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:26.848 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:26.848 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:26.848 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:26.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:26.848 ++ return 0 00:05:26.848 ++ set -o errtrace 00:05:26.848 ++ shopt -s extdebug 00:05:26.848 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:26.848 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@1687 -- # true 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@29 -- # exec 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:26.848 08:32:01 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:26.848 --rc lcov_branch_coverage=1 00:05:26.848 --rc lcov_function_coverage=1 00:05:26.848 --rc genhtml_branch_coverage=1 00:05:26.848 --rc genhtml_function_coverage=1 00:05:26.848 --rc genhtml_legend=1 00:05:26.848 --rc geninfo_all_blocks=1 00:05:26.848 ' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@201 -- # 
LCOV_OPTS=' 00:05:26.848 --rc lcov_branch_coverage=1 00:05:26.848 --rc lcov_function_coverage=1 00:05:26.848 --rc genhtml_branch_coverage=1 00:05:26.848 --rc genhtml_function_coverage=1 00:05:26.848 --rc genhtml_legend=1 00:05:26.848 --rc geninfo_all_blocks=1 00:05:26.848 ' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:26.848 --rc lcov_branch_coverage=1 00:05:26.848 --rc lcov_function_coverage=1 00:05:26.848 --rc genhtml_branch_coverage=1 00:05:26.848 --rc genhtml_function_coverage=1 00:05:26.848 --rc genhtml_legend=1 00:05:26.848 --rc geninfo_all_blocks=1 00:05:26.848 --no-external' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:26.848 --rc lcov_branch_coverage=1 00:05:26.848 --rc lcov_function_coverage=1 00:05:26.848 --rc genhtml_branch_coverage=1 00:05:26.848 --rc genhtml_function_coverage=1 00:05:26.848 --rc genhtml_legend=1 00:05:26.848 --rc geninfo_all_blocks=1 00:05:26.848 --no-external' 00:05:26.848 08:32:01 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:28.756 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found
00:05:28.756 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno
[geninfo then emitted the same two-line "no functions found" / "GCOV did not produce any data" warning pair for every other header stub under test/cpp_headers, from bit_pool.gcno through tree.gcno; the remaining near-identical warning pairs are condensed here]
00:06:25.269 08:32:53 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:25.269 08:32:53 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:25.269 08:32:53 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:25.269 08:32:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.269 08:32:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.269 08:32:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:25.269 ************************************ 00:06:25.269 START TEST unittest_pci_event 00:06:25.269 ************************************ 00:06:25.270 08:32:53 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:25.270 00:06:25.270 00:06:25.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.270
http://cunit.sourceforge.net/ 00:06:25.270 00:06:25.270 00:06:25.270 Suite: pci_event 00:06:25.270 Test: test_pci_parse_event ...[2024-07-12 08:32:53.598193] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:25.270 [2024-07-12 08:32:53.598701] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:25.270 passed 00:06:25.270 00:06:25.270 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.270 suites 1 1 n/a 0 0 00:06:25.270 tests 1 1 1 0 0 00:06:25.270 asserts 15 15 15 0 n/a 00:06:25.270 00:06:25.270 Elapsed time = 0.001 seconds 00:06:25.270 00:06:25.270 real 0m0.039s 00:06:25.270 user 0m0.028s 00:06:25.270 sys 0m0.006s 00:06:25.270 08:32:53 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.270 08:32:53 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 END TEST unittest_pci_event 00:06:25.270 ************************************ 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:25.270 08:32:53 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 START TEST unittest_include 00:06:25.270 ************************************ 00:06:25.270 08:32:53 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:25.270 00:06:25.270 00:06:25.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.270 http://cunit.sourceforge.net/ 00:06:25.270 00:06:25.270 00:06:25.270 Suite: histogram 00:06:25.270 Test: histogram_test ...passed 00:06:25.270 Test: histogram_merge ...passed 00:06:25.270 00:06:25.270 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.270 suites 1 1 n/a 0 0 00:06:25.270 tests 2 2 2 0 0 00:06:25.270 asserts 50 50 50 0 n/a 00:06:25.270 00:06:25.270 Elapsed time = 0.006 seconds 00:06:25.270 00:06:25.270 real 0m0.031s 00:06:25.270 user 0m0.026s 00:06:25.270 sys 0m0.005s 00:06:25.270 08:32:53 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.270 08:32:53 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 END TEST unittest_include 00:06:25.270 ************************************ 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:25.270 08:32:53 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.270 08:32:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 START TEST unittest_bdev 00:06:25.270 ************************************ 00:06:25.270 08:32:53 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 
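
Each suite here is driven by the run_test helper from test/common/autotest_common.sh, which banners the test name, runs the given command, and propagates its exit status. Stripped of the timing and xtrace plumbing visible in the trace, it behaves roughly like this simplified sketch:

    run_test() {    # simplified; the real helper also records per-test timing
        local name=$1; shift
        echo "START TEST $name"
        "$@"; local rc=$?
        echo "END TEST $name"
        return $rc
    }
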
00:06:25.270 08:32:53 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:25.270 00:06:25.270 00:06:25.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.270 http://cunit.sourceforge.net/ 00:06:25.270 00:06:25.270 00:06:25.270 Suite: bdev 00:06:25.270 Test: bytes_to_blocks_test ...passed 00:06:25.270 Test: num_blocks_test ...passed 00:06:25.270 Test: io_valid_test ...passed 00:06:25.270 Test: open_write_test ...[2024-07-12 08:32:53.847480] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:25.270 [2024-07-12 08:32:53.848138] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:25.270 [2024-07-12 08:32:53.848458] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:25.270 passed 00:06:25.270 Test: claim_test ...passed 00:06:25.270 Test: alias_add_del_test ...[2024-07-12 08:32:53.945082] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:25.270 [2024-07-12 08:32:53.945364] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:25.270 [2024-07-12 08:32:53.945446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:25.270 passed 00:06:25.270 Test: get_device_stat_test ...passed 00:06:25.270 Test: bdev_io_types_test ...passed 00:06:25.270 Test: bdev_io_wait_test ...passed 00:06:25.270 Test: bdev_io_spans_split_test ...passed 00:06:25.270 Test: bdev_io_boundary_split_test ...passed 00:06:25.270 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-12 08:32:54.108826] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:25.270 passed 00:06:25.270 Test: bdev_io_mix_split_test ...passed 00:06:25.270 Test: bdev_io_split_with_io_wait ...passed 00:06:25.270 Test: bdev_io_write_unit_split_test ...[2024-07-12 08:32:54.222323] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:25.270 [2024-07-12 08:32:54.222717] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:25.270 [2024-07-12 08:32:54.222780] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:25.270 [2024-07-12 08:32:54.222923] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:25.270 passed 00:06:25.270 Test: bdev_io_alignment_with_boundary ...passed 00:06:25.270 Test: bdev_io_alignment ...passed 00:06:25.270 Test: bdev_histograms ...passed 00:06:25.270 Test: bdev_write_zeroes ...passed 00:06:25.270 Test: bdev_compare_and_write ...passed 00:06:25.270 Test: bdev_compare ...passed 00:06:25.270 Test: bdev_compare_emulated ...passed 00:06:25.270 Test: bdev_zcopy_write ...passed 00:06:25.270 Test: bdev_zcopy_read ...passed 00:06:25.270 Test: bdev_open_while_hotremove ...passed 00:06:25.270 Test: bdev_close_while_hotremove ...passed 00:06:25.270 Test: bdev_open_ext_test ...[2024-07-12 08:32:54.686854] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:25.270 passed 00:06:25.270 Test: bdev_open_ext_unregister ...[2024-07-12 08:32:54.687306] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:25.270 passed 00:06:25.270 Test: bdev_set_io_timeout ...passed 00:06:25.270 Test: bdev_set_qd_sampling ...passed 00:06:25.270 Test: lba_range_overlap ...passed 00:06:25.270 Test: lock_lba_range_check_ranges ...passed 00:06:25.270 Test: lock_lba_range_with_io_outstanding ...passed 00:06:25.270 Test: lock_lba_range_overlapped ...passed 00:06:25.270 Test: bdev_quiesce ...[2024-07-12 08:32:54.881636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:06:25.270 passed 00:06:25.270 Test: bdev_io_abort ...passed 00:06:25.270 Test: bdev_unmap ...passed 00:06:25.270 Test: bdev_write_zeroes_split_test ...passed 00:06:25.270 Test: bdev_set_options_test ...[2024-07-12 08:32:55.043214] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:25.270 passed 00:06:25.270 Test: bdev_get_memory_domains ...passed 00:06:25.270 Test: bdev_io_ext ...passed 00:06:25.270 Test: bdev_io_ext_no_opts ...passed 00:06:25.270 Test: bdev_io_ext_invalid_opts ...passed 00:06:25.270 Test: bdev_io_ext_split ...passed 00:06:25.270 Test: bdev_io_ext_bounce_buffer ...passed 00:06:25.270 Test: bdev_register_uuid_alias ...[2024-07-12 08:32:55.302897] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 9481b2d5-71a4-4e8f-aefe-dfc0b74a58cf already exists 00:06:25.270 [2024-07-12 08:32:55.303017] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:9481b2d5-71a4-4e8f-aefe-dfc0b74a58cf alias for bdev bdev0 00:06:25.270 passed 00:06:25.270 Test: bdev_unregister_by_name ...[2024-07-12 08:32:55.329851] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:25.270 [2024-07-12 08:32:55.329954] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7982:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:06:25.270 passed 00:06:25.270 Test: for_each_bdev_test ...passed 00:06:25.270 Test: bdev_seek_test ...passed 00:06:25.270 Test: bdev_copy ...passed 00:06:25.270 Test: bdev_copy_split_test ...passed 00:06:25.270 Test: examine_locks ...passed 00:06:25.270 Test: claim_v2_rwo ...[2024-07-12 08:32:55.456216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.456360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.456398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.456503] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.456547] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.456646] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:25.270 passed 00:06:25.270 Test: claim_v2_rom ...[2024-07-12 08:32:55.457015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.457123] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.457170] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.457211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:25.270 [2024-07-12 08:32:55.457299] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:25.270 [2024-07-12 08:32:55.457360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:25.271 passed 00:06:25.271 Test: claim_v2_rwm ...[2024-07-12 08:32:55.457579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:25.271 [2024-07-12 08:32:55.457672] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.457715] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.457758] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.457797] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.457843] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.457951] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:25.271 passed 00:06:25.271 Test: claim_v2_existing_writer ...[2024-07-12 08:32:55.458212] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:25.271 passed 00:06:25.271 Test: claim_v2_existing_v1 ...[2024-07-12 08:32:55.458268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:25.271 [2024-07-12 08:32:55.458477] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.458529] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.458562] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:25.271 passed 00:06:25.271 Test: claim_v1_existing_v2 ...[2024-07-12 08:32:55.458750] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.458832] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:25.271 [2024-07-12 08:32:55.458889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:25.271 passed 00:06:25.271 Test: examine_claimed ...[2024-07-12 08:32:55.459424] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:25.271 passed 00:06:25.271 00:06:25.271 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.271 suites 1 1 n/a 0 0 00:06:25.271 tests 59 59 59 0 0 00:06:25.271 asserts 4599 4599 4599 0 n/a 00:06:25.271 00:06:25.271 Elapsed time = 1.672 seconds 00:06:25.271 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:25.271 00:06:25.271 00:06:25.271 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.271 http://cunit.sourceforge.net/ 00:06:25.271 00:06:25.271 00:06:25.271 Suite: nvme 00:06:25.271 Test: test_create_ctrlr ...passed 00:06:25.271 Test: test_reset_ctrlr ...[2024-07-12 08:32:55.505942] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:25.271 passed 00:06:25.271 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:25.271 Test: test_failover_ctrlr ...passed 00:06:25.271 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-12 08:32:55.508062] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.508232] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.508458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_pending_reset ...[2024-07-12 08:32:55.509705] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.509921] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_attach_ctrlr ...[2024-07-12 08:32:55.510826] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:25.271 passed 00:06:25.271 Test: test_aer_cb ...passed 00:06:25.271 Test: test_submit_nvme_cmd ...passed 00:06:25.271 Test: test_add_remove_trid ...passed 00:06:25.271 Test: test_abort ...[2024-07-12 08:32:55.513698] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:25.271 passed 00:06:25.271 Test: test_get_io_qpair ...passed 00:06:25.271 Test: test_bdev_unregister ...passed 00:06:25.271 Test: test_compare_ns ...passed 00:06:25.271 Test: test_init_ana_log_page ...passed 00:06:25.271 Test: test_get_memory_domains ...passed 00:06:25.271 Test: test_reconnect_qpair ...[2024-07-12 08:32:55.516143] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_create_bdev_ctrlr ...[2024-07-12 08:32:55.516648] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:25.271 passed 00:06:25.271 Test: test_add_multi_ns_to_bdev ...[2024-07-12 08:32:55.517774] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:25.271 passed 00:06:25.271 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:25.271 Test: test_admin_path ...passed 00:06:25.271 Test: test_reset_bdev_ctrlr ...passed 00:06:25.271 Test: test_find_io_path ...passed 00:06:25.271 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:25.271 Test: test_retry_io_for_io_path_error ...passed 00:06:25.271 Test: test_retry_io_count ...passed 00:06:25.271 Test: test_concurrent_read_ana_log_page ...passed 00:06:25.271 Test: test_retry_io_for_ana_error ...passed 00:06:25.271 Test: test_check_io_error_resiliency_params ...[2024-07-12 08:32:55.525089] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:06:25.271 [2024-07-12 08:32:55.525234] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:25.271 [2024-07-12 08:32:55.525340] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:25.271 [2024-07-12 08:32:55.525441] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:25.271 [2024-07-12 08:32:55.525565] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:25.271 [2024-07-12 08:32:55.525681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:25.271 [2024-07-12 08:32:55.525726] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:25.271 [2024-07-12 08:32:55.525846] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:25.271 [2024-07-12 08:32:55.525903] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:25.271 passed 00:06:25.271 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:25.271 Test: test_reconnect_ctrlr ...[2024-07-12 08:32:55.527065] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.527316] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.527618] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.527846] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.528043] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_retry_failover_ctrlr ...[2024-07-12 08:32:55.528591] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_fail_path ...[2024-07-12 08:32:55.529337] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.529601] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:25.271 [2024-07-12 08:32:55.529789] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.529978] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.530232] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_nvme_ns_cmp ...passed 00:06:25.271 Test: test_ana_transition ...passed 00:06:25.271 Test: test_set_preferred_path ...passed 00:06:25.271 Test: test_find_next_io_path ...passed 00:06:25.271 Test: test_find_io_path_min_qd ...passed 00:06:25.271 Test: test_disable_auto_failback ...[2024-07-12 08:32:55.532651] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_set_multipath_policy ...passed 00:06:25.271 Test: test_uuid_generation ...passed 00:06:25.271 Test: test_retry_io_to_same_path ...passed 00:06:25.271 Test: test_race_between_reset_and_disconnected ...passed 00:06:25.271 Test: test_ctrlr_op_rpc ...passed 00:06:25.271 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:25.271 Test: test_disable_enable_ctrlr ...[2024-07-12 08:32:55.537030] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 [2024-07-12 08:32:55.537318] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:25.271 passed 00:06:25.271 Test: test_delete_ctrlr_done ...passed 00:06:25.271 Test: test_ns_remove_during_reset ...passed 00:06:25.271 Test: test_io_path_is_current ...passed 00:06:25.271 00:06:25.271 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.271 suites 1 1 n/a 0 0 00:06:25.271 tests 49 49 49 0 0 00:06:25.271 asserts 3577 3577 3577 0 n/a 00:06:25.271 00:06:25.272 Elapsed time = 0.028 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 Test Options 00:06:25.272 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:25.272 00:06:25.272 Suite: raid 00:06:25.272 Test: test_create_raid ...passed 00:06:25.272 Test: test_create_raid_superblock ...passed 00:06:25.272 Test: test_delete_raid ...passed 00:06:25.272 Test: test_create_raid_invalid_args ...[2024-07-12 08:32:55.580588] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:25.272 [2024-07-12 08:32:55.581133] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:25.272 [2024-07-12 08:32:55.581957] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:25.272 [2024-07-12 08:32:55.582314] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:25.272 [2024-07-12 
08:32:55.582524] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:25.272 [2024-07-12 08:32:55.583695] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:25.272 [2024-07-12 08:32:55.583863] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:25.272 passed 00:06:25.272 Test: test_delete_raid_invalid_args ...passed 00:06:25.272 Test: test_io_channel ...passed 00:06:25.272 Test: test_reset_io ...passed 00:06:25.272 Test: test_multi_raid ...passed 00:06:25.272 Test: test_io_type_supported ...passed 00:06:25.272 Test: test_raid_json_dump_info ...passed 00:06:25.272 Test: test_context_size ...passed 00:06:25.272 Test: test_raid_level_conversions ...passed 00:06:25.272 Test: test_raid_io_split ...passed 00:06:25.272 Test: test_raid_process ...passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 1 1 n/a 0 0 00:06:25.272 tests 14 14 14 0 0 00:06:25.272 asserts 6183 6183 6183 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.024 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: raid_sb 00:06:25.272 Test: test_raid_bdev_write_superblock ...passed 00:06:25.272 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:25.272 Test: test_raid_bdev_parse_superblock ...[2024-07-12 08:32:55.644232] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:25.272 passed 00:06:25.272 Suite: raid_sb_md 00:06:25.272 Test: test_raid_bdev_write_superblock ...passed 00:06:25.272 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:25.272 Test: test_raid_bdev_parse_superblock ...[2024-07-12 08:32:55.645433] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:25.272 passed 00:06:25.272 Suite: raid_sb_md_interleaved 00:06:25.272 Test: test_raid_bdev_write_superblock ...passed 00:06:25.272 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:25.272 Test: test_raid_bdev_parse_superblock ...[2024-07-12 08:32:55.646343] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:25.272 passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 3 3 n/a 0 0 00:06:25.272 tests 9 9 9 0 0 00:06:25.272 asserts 139 139 139 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.002 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: concat 00:06:25.272 Test: test_concat_start ...passed 00:06:25.272 Test: 
test_concat_rw ...passed 00:06:25.272 Test: test_concat_null_payload ...passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 1 1 n/a 0 0 00:06:25.272 tests 3 3 3 0 0 00:06:25.272 asserts 8460 8460 8460 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.008 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: raid0 00:06:25.272 Test: test_write_io ...passed 00:06:25.272 Test: test_read_io ...passed 00:06:25.272 Test: test_unmap_io ...passed 00:06:25.272 Test: test_io_failure ...passed 00:06:25.272 Suite: raid0_dif 00:06:25.272 Test: test_write_io ...passed 00:06:25.272 Test: test_read_io ...passed 00:06:25.272 Test: test_unmap_io ...passed 00:06:25.272 Test: test_io_failure ...passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 2 2 n/a 0 0 00:06:25.272 tests 8 8 8 0 0 00:06:25.272 asserts 368291 368291 368291 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.111 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: raid1 00:06:25.272 Test: test_raid1_start ...passed 00:06:25.272 Test: test_raid1_read_balancing ...passed 00:06:25.272 Test: test_raid1_write_error ...passed 00:06:25.272 Test: test_raid1_read_error ...passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 1 1 n/a 0 0 00:06:25.272 tests 4 4 4 0 0 00:06:25.272 asserts 4374 4374 4374 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.006 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: zone 00:06:25.272 Test: test_zone_get_operation ...passed 00:06:25.272 Test: test_bdev_zone_get_info ...passed 00:06:25.272 Test: test_bdev_zone_management ...passed 00:06:25.272 Test: test_bdev_zone_append ...passed 00:06:25.272 Test: test_bdev_zone_append_with_md ...passed 00:06:25.272 Test: test_bdev_zone_appendv ...passed 00:06:25.272 Test: test_bdev_zone_appendv_with_md ...passed 00:06:25.272 Test: test_bdev_io_get_append_location ...passed 00:06:25.272 00:06:25.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.272 suites 1 1 n/a 0 0 00:06:25.272 tests 8 8 8 0 0 00:06:25.272 asserts 94 94 94 0 n/a 00:06:25.272 00:06:25.272 Elapsed time = 0.001 seconds 00:06:25.272 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:25.272 00:06:25.272 00:06:25.272 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.272 http://cunit.sourceforge.net/ 00:06:25.272 00:06:25.272 00:06:25.272 Suite: gpt_parse 00:06:25.272 Test: test_parse_mbr_and_primary ...[2024-07-12 08:32:55.968537] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related 
buffer should not be NULL 00:06:25.272 [2024-07-12 08:32:55.969015] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:25.272 [2024-07-12 08:32:55.969202] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:25.272 [2024-07-12 08:32:55.969394] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:25.272 [2024-07-12 08:32:55.969539] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:25.272 [2024-07-12 08:32:55.969718] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:25.272 passed 00:06:25.272 Test: test_parse_secondary ...[2024-07-12 08:32:55.970730] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:25.272 [2024-07-12 08:32:55.970898] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:25.272 [2024-07-12 08:32:55.971032] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:25.272 [2024-07-12 08:32:55.971152] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:25.272 passed 00:06:25.272 Test: test_check_mbr ...[2024-07-12 08:32:55.972190] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:25.272 [2024-07-12 08:32:55.972386] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:25.272 passed 00:06:25.272 Test: test_read_header ...[2024-07-12 08:32:55.972586] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:25.272 [2024-07-12 08:32:55.972798] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:25.272 [2024-07-12 08:32:55.972965] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:25.272 [2024-07-12 08:32:55.973120] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:25.272 [2024-07-12 08:32:55.973246] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:25.273 [2024-07-12 08:32:55.973364] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:25.273 passed 00:06:25.273 Test: test_read_partitions ...[2024-07-12 08:32:55.973537] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:25.273 [2024-07-12 08:32:55.973669] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:25.273 [2024-07-12 08:32:55.973817] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:25.273 [2024-07-12 08:32:55.973947] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:25.273 [2024-07-12 08:32:55.974464] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:25.273 passed 00:06:25.273 00:06:25.273 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.273 suites 1 1 n/a 0 0 00:06:25.273 tests 5 5 5 0 0 00:06:25.273 asserts 33 33 33 0 n/a 00:06:25.273 00:06:25.273 Elapsed time = 0.005 seconds 00:06:25.273 08:32:55 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:25.273 00:06:25.273 00:06:25.273 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.273 http://cunit.sourceforge.net/ 00:06:25.273 00:06:25.273 00:06:25.273 Suite: bdev_part 00:06:25.273 Test: part_test ...[2024-07-12 08:32:56.014170] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 1969a209-242e-5f32-90cc-bae141023866 already exists 00:06:25.273 [2024-07-12 08:32:56.014601] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:1969a209-242e-5f32-90cc-bae141023866 alias for bdev test1 00:06:25.273 passed 00:06:25.273 Test: part_free_test ...passed 00:06:25.273 Test: part_get_io_channel_test ...passed 00:06:25.273 Test: part_construct_ext ...passed 00:06:25.273 00:06:25.273 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.273 suites 1 1 n/a 0 0 00:06:25.273 tests 4 4 4 0 0 00:06:25.273 asserts 48 48 48 0 n/a 00:06:25.273 00:06:25.273 Elapsed time = 0.046 seconds 00:06:25.273 08:32:56 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:25.273 00:06:25.273 00:06:25.273 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.273 http://cunit.sourceforge.net/ 00:06:25.273 00:06:25.273 00:06:25.273 Suite: scsi_nvme_suite 00:06:25.273 Test: scsi_nvme_translate_test ...passed 00:06:25.273 00:06:25.273 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.273 suites 1 1 n/a 0 0 00:06:25.273 tests 1 1 1 0 0 00:06:25.273 asserts 104 104 104 0 n/a 00:06:25.273 00:06:25.273 Elapsed time = 0.000 seconds 00:06:25.273 08:32:56 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:25.273 00:06:25.273 00:06:25.273 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.273 http://cunit.sourceforge.net/ 00:06:25.273 00:06:25.273 00:06:25.273 Suite: lvol 00:06:25.273 Test: ut_lvs_init ...[2024-07-12 08:32:56.139709] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:25.273 [2024-07-12 08:32:56.140155] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_init ...passed 00:06:25.273 Test: ut_lvol_snapshot ...passed 00:06:25.273 Test: ut_lvol_clone ...passed 00:06:25.273 Test: ut_lvs_destroy ...passed 00:06:25.273 Test: ut_lvs_unload ...passed 00:06:25.273 Test: ut_lvol_resize ...[2024-07-12 08:32:56.142662] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_set_read_only ...passed 00:06:25.273 Test: ut_lvol_hotremove ...passed 00:06:25.273 Test: 
ut_vbdev_lvol_get_io_channel ...passed 00:06:25.273 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:25.273 Test: ut_lvol_read_write ...passed 00:06:25.273 Test: ut_vbdev_lvol_submit_request ...passed 00:06:25.273 Test: ut_lvol_examine_config ...passed 00:06:25.273 Test: ut_lvol_examine_disk ...[2024-07-12 08:32:56.144351] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_rename ...[2024-07-12 08:32:56.145586] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:25.273 [2024-07-12 08:32:56.145760] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:25.273 passed 00:06:25.273 Test: ut_bdev_finish ...passed 00:06:25.273 Test: ut_lvs_rename ...passed 00:06:25.273 Test: ut_lvol_seek ...passed 00:06:25.273 Test: ut_esnap_dev_create ...[2024-07-12 08:32:56.147129] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:25.273 [2024-07-12 08:32:56.147275] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:25.273 [2024-07-12 08:32:56.147409] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-12 08:32:56.147605] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:25.273 [2024-07-12 08:32:56.147710] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_shallow_copy ...[2024-07-12 08:32:56.148252] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:25.273 [2024-07-12 08:32:56.148397] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:25.273 passed 00:06:25.273 Test: ut_lvol_set_external_parent ...[2024-07-12 08:32:56.148747] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:25.273 passed 00:06:25.273 00:06:25.273 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.273 suites 1 1 n/a 0 0 00:06:25.273 tests 23 23 23 0 0 00:06:25.273 asserts 770 770 770 0 n/a 00:06:25.273 00:06:25.273 Elapsed time = 0.006 seconds 00:06:25.273 08:32:56 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:25.273 00:06:25.273 00:06:25.273 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.273 http://cunit.sourceforge.net/ 00:06:25.273 00:06:25.273 00:06:25.273 Suite: zone_block 00:06:25.273 Test: test_zone_block_create ...passed 00:06:25.273 Test: test_zone_block_create_invalid ...[2024-07-12 08:32:56.204368] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base 
bdev Nvme0n1 already claimed 00:06:25.273 [2024-07-12 08:32:56.204832] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 08:32:56.205118] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:25.273 [2024-07-12 08:32:56.205270] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 08:32:56.205517] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:25.273 [2024-07-12 08:32:56.205558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-12 08:32:56.205637] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:25.273 [2024-07-12 08:32:56.205697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:25.273 Test: test_get_zone_info ...[2024-07-12 08:32:56.206190] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.206378] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.206531] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 passed 00:06:25.273 Test: test_supported_io_types ...passed 00:06:25.273 Test: test_reset_zone ...[2024-07-12 08:32:56.207819] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.207975] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 passed 00:06:25.273 Test: test_open_zone ...[2024-07-12 08:32:56.208767] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.209553] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.209732] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 passed 00:06:25.273 Test: test_zone_write ...[2024-07-12 08:32:56.210554] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:25.273 [2024-07-12 08:32:56.210728] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:25.273 [2024-07-12 08:32:56.210901] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:25.273 [2024-07-12 08:32:56.211057] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.216581] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:25.273 [2024-07-12 08:32:56.216755] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.216916] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:25.273 [2024-07-12 08:32:56.217033] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.273 [2024-07-12 08:32:56.222448] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:25.274 [2024-07-12 08:32:56.222641] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 passed 00:06:25.274 Test: test_zone_read ...[2024-07-12 08:32:56.223408] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:25.274 [2024-07-12 08:32:56.223570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.223732] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:25.274 [2024-07-12 08:32:56.223849] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.224424] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:25.274 [2024-07-12 08:32:56.224579] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 passed 00:06:25.274 Test: test_close_zone ...[2024-07-12 08:32:56.225248] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.225453] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.225743] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.225901] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:25.274 passed 00:06:25.274 Test: test_finish_zone ...[2024-07-12 08:32:56.226820] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.227008] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 passed 00:06:25.274 Test: test_append_zone ...[2024-07-12 08:32:56.227677] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:25.274 [2024-07-12 08:32:56.227841] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.228006] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:25.274 [2024-07-12 08:32:56.228122] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 [2024-07-12 08:32:56.238798] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:25.274 [2024-07-12 08:32:56.238972] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:25.274 passed 00:06:25.274 00:06:25.274 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.274 suites 1 1 n/a 0 0 00:06:25.274 tests 11 11 11 0 0 00:06:25.274 asserts 3437 3437 3437 0 n/a 00:06:25.274 00:06:25.274 Elapsed time = 0.031 seconds 00:06:25.274 08:32:56 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:25.274 00:06:25.274 00:06:25.274 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.274 http://cunit.sourceforge.net/ 00:06:25.274 00:06:25.274 00:06:25.274 Suite: bdev 00:06:25.274 Test: basic ...[2024-07-12 08:32:56.334708] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558d041247c1): Operation not permitted (rc=-1) 00:06:25.274 [2024-07-12 08:32:56.335236] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x558d04124780): Operation not permitted (rc=-1) 00:06:25.274 [2024-07-12 08:32:56.335382] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558d041247c1): Operation not permitted (rc=-1) 00:06:25.274 passed 00:06:25.274 Test: unregister_and_close ...passed 00:06:25.274 Test: unregister_and_close_different_threads ...passed 00:06:25.274 Test: basic_qos ...passed 00:06:25.274 Test: put_channel_during_reset ...passed 00:06:25.274 Test: aborted_reset ...passed 00:06:25.274 Test: aborted_reset_no_outstanding_io ...passed 00:06:25.274 Test: io_during_reset ...passed 00:06:25.274 Test: reset_completions ...passed 00:06:25.274 Test: io_during_qos_queue ...passed 00:06:25.274 Test: io_during_qos_reset ...passed 00:06:25.274 Test: enomem ...passed 00:06:25.274 Test: enomem_multi_bdev ...passed 00:06:25.274 Test: enomem_multi_bdev_unregister ...passed 00:06:25.274 Test: enomem_multi_io_target ...passed 00:06:25.274 Test: qos_dynamic_enable ...passed 00:06:25.274 Test: 
bdev_histograms_mt ...passed
00:06:25.274 Test: bdev_set_io_timeout_mt ...[2024-07-12 08:32:57.150876] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered
00:06:25.274 passed
00:06:25.274 Test: lock_lba_range_then_submit_io ...[2024-07-12 08:32:57.172400] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x558d04124740 already registered (old:0x6130000003c0 new:0x613000000c80)
00:06:25.274 passed
00:06:25.274 Test: unregister_during_reset ...passed
00:06:25.274 Test: event_notify_and_close ...passed
00:06:25.274 Test: unregister_and_qos_poller ...passed
00:06:25.274 Suite: bdev_wrong_thread
00:06:25.274 Test: spdk_bdev_register_wt ...[2024-07-12 08:32:57.317398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80)
00:06:25.274 passed
00:06:25.274 Test: spdk_bdev_examine_wt ...[2024-07-12 08:32:57.317920] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80)
00:06:25.274 passed
00:06:25.274
00:06:25.274 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.274 suites 2 2 n/a 0 0
00:06:25.274 tests 24 24 24 0 0
00:06:25.274 asserts 621 621 621 0 n/a
00:06:25.274
00:06:25.274 Elapsed time = 0.998 seconds
00:06:25.274 ************************************
00:06:25.274 END TEST unittest_bdev
00:06:25.274 ************************************
00:06:25.274
00:06:25.274 real 0m3.607s
00:06:25.274 user 0m1.721s
00:06:25.274 sys 0m1.828s
00:06:25.274 08:32:57 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:25.274 08:32:57 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x
00:06:25.274 08:32:57 unittest -- common/autotest_common.sh@1142 -- # return 0
00:06:25.274 08:32:57 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:25.274 08:32:57 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:25.274 08:32:57 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:25.274 08:32:57 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:25.274 08:32:57 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:06:25.274 08:32:57 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:25.274 08:32:57 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:25.274 08:32:57 unittest -- common/autotest_common.sh@10 -- # set +x
00:06:25.274 ************************************
00:06:25.274 START TEST unittest_bdev_raid5f
00:06:25.274 ************************************
00:06:25.274 08:32:57 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:06:25.274
00:06:25.274
00:06:25.274 CUnit - A unit testing framework for C - Version 2.1-3
00:06:25.274 http://cunit.sourceforge.net/
00:06:25.274
00:06:25.274
00:06:25.274 Suite: raid5f
00:06:25.274 Test: test_raid5f_start ...passed
00:06:25.274 Test: test_raid5f_submit_read_request ...passed
00:06:25.274 Test: test_raid5f_stripe_request_map_iovecs ...passed
00:06:27.799 Test: test_raid5f_submit_full_stripe_write_request ...passed
00:06:54.345 Test: test_raid5f_chunk_write_error ...passed
00:07:04.312 Test: test_raid5f_chunk_write_error_with_enomem ...passed
00:07:08.498 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed
00:07:55.212 Test: test_raid5f_submit_read_request_degraded ...passed
00:07:55.212
00:07:55.212 Run Summary: Type Total Ran Passed Failed Inactive
00:07:55.212 suites 1 1 n/a 0 0
00:07:55.212 tests 8 8 8 0 0
00:07:55.212 asserts 518158 518158 518158 0 n/a
00:07:55.212
00:07:55.212 Elapsed time = 85.651 seconds
00:07:55.212 ************************************
00:07:55.212 END TEST unittest_bdev_raid5f
00:07:55.212 ************************************
00:07:55.212
00:07:55.212 real 1m25.731s
00:07:55.212 user 1m21.646s
00:07:55.212 sys 0m4.077s
00:07:55.212 08:34:23 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:55.212 08:34:23 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:07:55.212 08:34:23 unittest -- common/autotest_common.sh@1142 -- # return 0
00:07:55.212 08:34:23 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob
00:07:55.212 08:34:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:55.212 08:34:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:55.212 08:34:23 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:55.212 ************************************
00:07:55.212 START TEST unittest_blob_blobfs
00:07:55.213 ************************************
00:07:55.213 08:34:23 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob
00:07:55.213 08:34:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:07:55.213 08:34:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:07:55.213
00:07:55.213
00:07:55.213 CUnit - A unit testing framework for C - Version 2.1-3
00:07:55.213 http://cunit.sourceforge.net/
00:07:55.213
00:07:55.213
00:07:55.213 Suite: blob_nocopy_noextent
00:07:55.213 Test: blob_init ...[2024-07-12 08:34:23.230905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:55.213 passed
00:07:55.213 Test: blob_thin_provision ...passed
00:07:55.213 Test: blob_read_only ...passed
00:07:55.213 Test: bs_load ...[2024-07-12 08:34:23.331293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:55.213 passed
00:07:55.213 Test: bs_load_custom_cluster_size ...passed
00:07:55.213 Test: bs_load_after_failed_grow ...passed
00:07:55.213 Test: bs_cluster_sz ...[2024-07-12 08:34:23.364996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:55.213 [2024-07-12 08:34:23.365637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:55.213 [2024-07-12 08:34:23.365972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:55.213 passed
00:07:55.213 Test: bs_resize_md ...passed
00:07:55.213 Test: bs_destroy ...passed
00:07:55.213 Test: bs_type ...passed
00:07:55.213 Test: bs_super_block ...passed
00:07:55.213 Test: bs_test_recover_cluster_count ...passed
00:07:55.213 Test: bs_grow_live ...passed
00:07:55.213 Test: bs_grow_live_no_space ...passed
00:07:55.213 Test: bs_test_grow ...passed
00:07:55.213 Test: blob_serialize_test ...passed
00:07:55.213 Test: super_block_crc ...passed
00:07:55.213 Test: blob_thin_prov_write_count_io ...passed
00:07:55.213 Test: blob_thin_prov_unmap_cluster ...passed
00:07:55.213 Test: bs_load_iter_test ...passed
00:07:55.213 Test: blob_relations ...[2024-07-12 08:34:23.573275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.573553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.574768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.574988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 passed
00:07:55.213 Test: blob_relations2 ...[2024-07-12 08:34:23.591248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.591549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.591642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.591801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.593584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.593791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.594443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.213 [2024-07-12 08:34:23.594639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 passed
00:07:55.213 Test: blob_relations3 ...passed
00:07:55.213 Test: blobstore_clean_power_failure ...passed
00:07:55.213 Test: blob_delete_snapshot_power_failure ...[2024-07-12 08:34:23.781248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:55.213 [2024-07-12 08:34:23.795868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:55.213 [2024-07-12 08:34:23.796201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:55.213 [2024-07-12 08:34:23.796463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.810851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:55.213 [2024-07-12 08:34:23.811253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:55.213 [2024-07-12 08:34:23.811360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:55.213 [2024-07-12 08:34:23.811615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.826360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:55.213 [2024-07-12 08:34:23.826722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.842677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:55.213 [2024-07-12 08:34:23.843070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:23.858658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:55.213 [2024-07-12 08:34:23.858983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 passed
00:07:55.213 Test: blob_create_snapshot_power_failure ...[2024-07-12 08:34:23.903176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:55.213 [2024-07-12 08:34:23.930975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:55.213 [2024-07-12 08:34:23.945017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:55.213 passed
00:07:55.213 Test: blob_io_unit ...passed
00:07:55.213 Test: blob_io_unit_compatibility ...passed
00:07:55.213 Test: blob_ext_md_pages ...passed
00:07:55.213 Test: blob_esnap_io_4096_4096 ...passed
00:07:55.213 Test: blob_esnap_io_512_512 ...passed
00:07:55.213 Test: blob_esnap_io_4096_512 ...passed
00:07:55.213 Test: blob_esnap_io_512_4096 ...passed
00:07:55.213 Test: blob_esnap_clone_resize ...passed
00:07:55.213 Suite: blob_bs_nocopy_noextent
00:07:55.213 Test: blob_open ...passed
00:07:55.213 Test: blob_create ...[2024-07-12 08:34:24.278412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:55.213 passed
00:07:55.213 Test: blob_create_loop ...passed
00:07:55.213 Test: blob_create_fail ...[2024-07-12 08:34:24.397169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.213 passed
00:07:55.213 Test: blob_create_internal ...passed
00:07:55.213 Test: blob_create_zero_extent ...passed
00:07:55.213 Test: blob_snapshot ...passed
00:07:55.213 Test: blob_clone ...passed
00:07:55.213 Test: blob_inflate ...[2024-07-12 08:34:24.615583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:55.213 passed
00:07:55.213 Test: blob_delete ...passed
00:07:55.213 Test: blob_resize_test ...[2024-07-12 08:34:24.693431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:55.213 passed
00:07:55.213 Test: blob_resize_thin_test ...passed
00:07:55.213 Test: channel_ops ...passed
00:07:55.213 Test: blob_super ...passed
00:07:55.213 Test: blob_rw_verify_iov ...passed
00:07:55.213 Test: blob_unmap ...passed
00:07:55.213 Test: blob_iter ...passed
00:07:55.213 Test: blob_parse_md ...passed
00:07:55.213 Test: bs_load_pending_removal ...passed
00:07:55.213 Test: bs_unload ...[2024-07-12 08:34:25.067535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:55.213 passed
00:07:55.213 Test: bs_usable_clusters ...passed
00:07:55.213 Test: blob_crc ...[2024-07-12 08:34:25.149567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:55.213 [2024-07-12 08:34:25.149940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:55.213 passed
00:07:55.213 Test: blob_flags ...passed
00:07:55.213 Test: bs_version ...passed
00:07:55.213 Test: blob_set_xattrs_test ...[2024-07-12 08:34:25.273664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.213 [2024-07-12 08:34:25.273989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.213 passed
00:07:55.213 Test: blob_thin_prov_alloc ...passed
00:07:55.213 Test: blob_insert_cluster_msg_test ...passed
00:07:55.213 Test: blob_thin_prov_rw ...passed
00:07:55.213 Test: blob_thin_prov_rle ...passed
00:07:55.213 Test: blob_thin_prov_rw_iov ...passed
00:07:55.213 Test: blob_snapshot_rw ...passed
00:07:55.213 Test: blob_snapshot_rw_iov ...passed
00:07:55.213 Test: blob_inflate_rw ...passed
00:07:55.213 Test: blob_snapshot_freeze_io ...passed
00:07:55.213 Test: blob_operation_split_rw ...passed
00:07:55.213 Test: blob_operation_split_rw_iov ...passed
00:07:55.213 Test: blob_simultaneous_operations ...[2024-07-12 08:34:26.397682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.213 [2024-07-12 08:34:26.398044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:26.399359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.213 [2024-07-12 08:34:26.399637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:26.412653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.213 [2024-07-12 08:34:26.412824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 [2024-07-12 08:34:26.412982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.213 [2024-07-12 08:34:26.413145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.213 passed
00:07:55.213 Test: blob_persist_test ...passed
00:07:55.213 Test: blob_decouple_snapshot ...passed
00:07:55.213 Test: blob_seek_io_unit ...passed
00:07:55.213 Test: blob_nested_freezes ...passed
00:07:55.213 Test: blob_clone_resize ...passed
00:07:55.213 Test: blob_shallow_copy ...[2024-07-12 08:34:26.733604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:55.213 [2024-07-12 08:34:26.734248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:55.214 [2024-07-12 08:34:26.734591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:55.214 passed
00:07:55.214 Suite: blob_blob_nocopy_noextent
00:07:55.214 Test: blob_write ...passed
00:07:55.214 Test: blob_read ...passed
00:07:55.214 Test: blob_rw_verify ...passed
00:07:55.214 Test: blob_rw_verify_iov_nomem ...passed
00:07:55.214 Test: blob_rw_iov_read_only ...passed
00:07:55.214 Test: blob_xattr ...passed
00:07:55.214 Test: blob_dirty_shutdown ...passed
00:07:55.214 Test: blob_is_degraded ...passed
00:07:55.214 Suite: blob_esnap_bs_nocopy_noextent
00:07:55.214 Test: blob_esnap_create ...passed
00:07:55.214 Test: blob_esnap_thread_add_remove ...passed
00:07:55.214 Test: blob_esnap_clone_snapshot ...passed
00:07:55.214 Test: blob_esnap_clone_inflate ...passed
00:07:55.214 Test: blob_esnap_clone_decouple ...passed
00:07:55.214 Test: blob_esnap_clone_reload ...passed
00:07:55.214 Test: blob_esnap_hotplug ...passed
00:07:55.214 Test: blob_set_parent ...[2024-07-12 08:34:27.422686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:55.214 [2024-07-12 08:34:27.422947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:55.214 [2024-07-12 08:34:27.423272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:55.214 [2024-07-12 08:34:27.423461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:55.214 [2024-07-12 08:34:27.424233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:55.214 passed
00:07:55.214 Test: blob_set_external_parent ...[2024-07-12 08:34:27.468469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:55.214 [2024-07-12 08:34:27.468780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:55.214 [2024-07-12 08:34:27.468939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:55.214 [2024-07-12 08:34:27.469542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:55.214 passed
00:07:55.214 Suite: blob_nocopy_extent
00:07:55.214 Test: blob_init ...[2024-07-12 08:34:27.484281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:55.214 passed
00:07:55.214 Test: blob_thin_provision ...passed
00:07:55.214 Test: blob_read_only ...passed
00:07:55.214 Test: bs_load ...[2024-07-12 08:34:27.541250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:55.214 passed
00:07:55.214 Test: bs_load_custom_cluster_size ...passed
00:07:55.214 Test: bs_load_after_failed_grow ...passed
00:07:55.214 Test: bs_cluster_sz ...[2024-07-12 08:34:27.574469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:55.214 [2024-07-12 08:34:27.574836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:55.214 [2024-07-12 08:34:27.575072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:55.214 passed
00:07:55.214 Test: bs_resize_md ...passed
00:07:55.214 Test: bs_destroy ...passed
00:07:55.214 Test: bs_type ...passed
00:07:55.214 Test: bs_super_block ...passed
00:07:55.214 Test: bs_test_recover_cluster_count ...passed
00:07:55.214 Test: bs_grow_live ...passed
00:07:55.214 Test: bs_grow_live_no_space ...passed
00:07:55.214 Test: bs_test_grow ...passed
00:07:55.214 Test: blob_serialize_test ...passed
00:07:55.214 Test: super_block_crc ...passed
00:07:55.214 Test: blob_thin_prov_write_count_io ...passed
00:07:55.214 Test: blob_thin_prov_unmap_cluster ...passed
00:07:55.214 Test: bs_load_iter_test ...passed
00:07:55.214 Test: blob_relations ...[2024-07-12 08:34:27.793593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.793944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:27.795152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.795356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 passed
00:07:55.214 Test: blob_relations2 ...[2024-07-12 08:34:27.813301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.813627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:27.813720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.813966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:27.815803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.816054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:27.816734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:55.214 [2024-07-12 08:34:27.816934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 passed
00:07:55.214 Test: blob_relations3 ...passed
00:07:55.214 Test: blobstore_clean_power_failure ...passed
00:07:55.214 Test: blob_delete_snapshot_power_failure ...[2024-07-12 08:34:28.009743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:55.214 [2024-07-12 08:34:28.025428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:55.214 [2024-07-12 08:34:28.041167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:55.214 [2024-07-12 08:34:28.041516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:55.214 [2024-07-12 08:34:28.041592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:28.057078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:55.214 [2024-07-12 08:34:28.057433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:55.214 [2024-07-12 08:34:28.057497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:55.214 [2024-07-12 08:34:28.057633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:28.072431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:55.214 [2024-07-12 08:34:28.072838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:55.214 [2024-07-12 08:34:28.073248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:55.214 [2024-07-12 08:34:28.073615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:28.092351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:55.214 [2024-07-12 08:34:28.092747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:28.108317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:55.214 [2024-07-12 08:34:28.108779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 [2024-07-12 08:34:28.124848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:55.214 [2024-07-12 08:34:28.125245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.214 passed
00:07:55.214 Test: blob_create_snapshot_power_failure ...[2024-07-12 08:34:28.170587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:55.214 [2024-07-12 08:34:28.185415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:55.214 [2024-07-12 08:34:28.213175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:55.214 [2024-07-12 08:34:28.227524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:55.214 passed
00:07:55.214 Test: blob_io_unit ...passed
00:07:55.214 Test: blob_io_unit_compatibility ...passed
00:07:55.214 Test: blob_ext_md_pages ...passed
00:07:55.214 Test: blob_esnap_io_4096_4096 ...passed
00:07:55.214 Test: blob_esnap_io_512_512 ...passed
00:07:55.214 Test: blob_esnap_io_4096_512 ...passed
00:07:55.214 Test: blob_esnap_io_512_4096 ...passed
00:07:55.214 Test: blob_esnap_clone_resize ...passed
00:07:55.214 Suite: blob_bs_nocopy_extent
00:07:55.214 Test: blob_open ...passed
00:07:55.214 Test: blob_create ...[2024-07-12 08:34:28.566365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:55.214 passed
00:07:55.214 Test: blob_create_loop ...passed
00:07:55.214 Test: blob_create_fail ...[2024-07-12 08:34:28.704826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.214 passed
00:07:55.214 Test: blob_create_internal ...passed
00:07:55.214 Test: blob_create_zero_extent ...passed
00:07:55.214 Test: blob_snapshot ...passed
00:07:55.214 Test: blob_clone ...passed
00:07:55.214 Test: blob_inflate ...[2024-07-12 08:34:28.935129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:55.214 passed
00:07:55.214 Test: blob_delete ...passed
00:07:55.214 Test: blob_resize_test ...[2024-07-12 08:34:29.015888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:55.214 passed
00:07:55.214 Test: blob_resize_thin_test ...passed
00:07:55.214 Test: channel_ops ...passed
00:07:55.214 Test: blob_super ...passed
00:07:55.214 Test: blob_rw_verify_iov ...passed
00:07:55.214 Test: blob_unmap ...passed
00:07:55.214 Test: blob_iter ...passed
00:07:55.214 Test: blob_parse_md ...passed
00:07:55.214 Test: bs_load_pending_removal ...passed
00:07:55.214 Test: bs_unload ...[2024-07-12 08:34:29.389563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:55.214 passed
00:07:55.214 Test: bs_usable_clusters ...passed
00:07:55.214 Test: blob_crc ...[2024-07-12 08:34:29.473857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:55.215 [2024-07-12 08:34:29.474470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:55.215 passed
00:07:55.215 Test: blob_flags ...passed
00:07:55.215 Test: bs_version ...passed
00:07:55.215 Test: blob_set_xattrs_test ...[2024-07-12 08:34:29.595416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.215 [2024-07-12 08:34:29.596185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:55.215 passed
00:07:55.215 Test: blob_thin_prov_alloc ...passed
00:07:55.215 Test: blob_insert_cluster_msg_test ...passed
00:07:55.215 Test: blob_thin_prov_rw ...passed
00:07:55.215 Test: blob_thin_prov_rle ...passed
00:07:55.215 Test: blob_thin_prov_rw_iov ...passed
00:07:55.215 Test: blob_snapshot_rw ...passed
00:07:55.215 Test: blob_snapshot_rw_iov ...passed
00:07:55.215 Test: blob_inflate_rw ...passed
00:07:55.215 Test: blob_snapshot_freeze_io ...passed
00:07:55.474 Test: blob_operation_split_rw ...passed
00:07:55.732 Test: blob_operation_split_rw_iov ...passed
00:07:55.732 Test: blob_simultaneous_operations ...[2024-07-12 08:34:30.724865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.732 [2024-07-12 08:34:30.725818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.732 [2024-07-12 08:34:30.727498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.732 [2024-07-12 08:34:30.727782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.733 [2024-07-12 08:34:30.740095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.733 [2024-07-12 08:34:30.740496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.733 [2024-07-12 08:34:30.741101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:55.733 [2024-07-12 08:34:30.741389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:55.733 passed
00:07:55.733 Test: blob_persist_test ...passed
00:07:55.733 Test: blob_decouple_snapshot ...passed
00:07:55.991 Test: blob_seek_io_unit ...passed
00:07:55.991 Test: blob_nested_freezes ...passed
00:07:55.991 Test: blob_clone_resize ...passed
00:07:55.991 Test: blob_shallow_copy ...[2024-07-12 08:34:31.077242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:55.991 [2024-07-12 08:34:31.077718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:55.991 [2024-07-12 08:34:31.078073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:55.991 passed
00:07:55.991 Suite: blob_blob_nocopy_extent
00:07:55.991 Test: blob_write ...passed
00:07:56.250 Test: blob_read ...passed
00:07:56.250 Test: blob_rw_verify ...passed
00:07:56.250 Test: blob_rw_verify_iov_nomem ...passed
00:07:56.250 Test: blob_rw_iov_read_only ...passed
00:07:56.250 Test: blob_xattr ...passed
00:07:56.250 Test: blob_dirty_shutdown ...passed
00:07:56.250 Test: blob_is_degraded ...passed
00:07:56.250 Suite: blob_esnap_bs_nocopy_extent
00:07:56.509 Test: blob_esnap_create ...passed
00:07:56.509 Test: blob_esnap_thread_add_remove ...passed
00:07:56.509 Test: blob_esnap_clone_snapshot ...passed
00:07:56.509 Test: blob_esnap_clone_inflate ...passed
00:07:56.509 Test: blob_esnap_clone_decouple ...passed
00:07:56.509 Test: blob_esnap_clone_reload ...passed
00:07:56.768 Test: blob_esnap_hotplug ...passed
00:07:56.768 Test: blob_set_parent ...[2024-07-12 08:34:31.753947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:56.768 [2024-07-12 08:34:31.754308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:56.768 [2024-07-12 08:34:31.754541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:56.768 [2024-07-12 08:34:31.754697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:56.768 [2024-07-12 08:34:31.755282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:56.768 passed
00:07:56.768 Test: blob_set_external_parent ...[2024-07-12 08:34:31.797152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:56.768 [2024-07-12 08:34:31.797497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:56.768 [2024-07-12 08:34:31.797623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:56.768 [2024-07-12 08:34:31.798209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:56.768 passed
00:07:56.768 Suite: blob_copy_noextent
00:07:56.768 Test: blob_init ...[2024-07-12 08:34:31.813094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:56.768 passed
00:07:56.768 Test: blob_thin_provision ...passed
00:07:56.768 Test: blob_read_only ...passed
00:07:56.768 Test: bs_load ...[2024-07-12 08:34:31.869757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:56.768 passed
00:07:56.768 Test: bs_load_custom_cluster_size ...passed
00:07:56.768 Test: bs_load_after_failed_grow ...passed
00:07:56.768 Test: bs_cluster_sz ...[2024-07-12 08:34:31.900400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:56.768 [2024-07-12 08:34:31.900768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:56.768 [2024-07-12 08:34:31.900937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:56.768 passed
00:07:56.768 Test: bs_resize_md ...passed
00:07:56.768 Test: bs_destroy ...passed
00:07:57.027 Test: bs_type ...passed
00:07:57.027 Test: bs_super_block ...passed
00:07:57.027 Test: bs_test_recover_cluster_count ...passed
00:07:57.027 Test: bs_grow_live ...passed
00:07:57.027 Test: bs_grow_live_no_space ...passed
00:07:57.027 Test: bs_test_grow ...passed
00:07:57.027 Test: blob_serialize_test ...passed
00:07:57.027 Test: super_block_crc ...passed
00:07:57.027 Test: blob_thin_prov_write_count_io ...passed
00:07:57.027 Test: blob_thin_prov_unmap_cluster ...passed
00:07:57.027 Test: bs_load_iter_test ...passed
00:07:57.027 Test: blob_relations ...[2024-07-12 08:34:32.135266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.027 [2024-07-12 08:34:32.135574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.027 [2024-07-12 08:34:32.136527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.027 [2024-07-12 08:34:32.136752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.027 passed
00:07:57.027 Test: blob_relations2 ...[2024-07-12 08:34:32.154651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.027 [2024-07-12 08:34:32.154965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.027 [2024-07-12 08:34:32.155191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.027 [2024-07-12 08:34:32.155344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.027 [2024-07-12 08:34:32.156851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.028 [2024-07-12 08:34:32.157083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.028 [2024-07-12 08:34:32.157647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:57.028 [2024-07-12 08:34:32.157833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.028 passed
00:07:57.028 Test: blob_relations3 ...passed
00:07:57.287 Test: blobstore_clean_power_failure ...passed
00:07:57.287 Test: blob_delete_snapshot_power_failure ...[2024-07-12 08:34:32.350151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:57.287 [2024-07-12 08:34:32.364858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:57.287 [2024-07-12 08:34:32.365265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:57.287 [2024-07-12 08:34:32.365435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.287 [2024-07-12 08:34:32.380001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:57.287 [2024-07-12 08:34:32.380318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:57.287 [2024-07-12 08:34:32.380458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:57.287 [2024-07-12 08:34:32.380605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.287 [2024-07-12 08:34:32.395268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:57.287 [2024-07-12 08:34:32.395571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.287 [2024-07-12 08:34:32.409878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:57.287 [2024-07-12 08:34:32.410299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.287 [2024-07-12 08:34:32.425059] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:57.287 [2024-07-12 08:34:32.425375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:57.287 passed
00:07:57.287 Test: blob_create_snapshot_power_failure ...[2024-07-12 08:34:32.469552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:57.546 [2024-07-12 08:34:32.497987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:57.546 [2024-07-12 08:34:32.512778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:57.546 passed
00:07:57.546 Test: blob_io_unit ...passed
00:07:57.546 Test: blob_io_unit_compatibility ...passed
00:07:57.546 Test: blob_ext_md_pages ...passed
00:07:57.546 Test: blob_esnap_io_4096_4096 ...passed
00:07:57.546 Test: blob_esnap_io_512_512 ...passed
00:07:57.546 Test: blob_esnap_io_4096_512 ...passed
00:07:57.806 Test: blob_esnap_io_512_4096 ...passed
00:07:57.806 Test: blob_esnap_clone_resize ...passed
00:07:57.806 Suite: blob_bs_copy_noextent
00:07:57.806 Test: blob_open ...passed
00:07:57.806 Test: blob_create ...[2024-07-12 08:34:32.847830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:57.806 passed
00:07:57.806 Test: blob_create_loop ...passed
00:07:57.806 Test: blob_create_fail ...[2024-07-12 08:34:32.962377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:57.806 passed
00:07:58.064 Test: blob_create_internal ...passed
00:07:58.064 Test: blob_create_zero_extent ...passed
00:07:58.064 Test: blob_snapshot ...passed
00:07:58.064 Test: blob_clone ...passed
00:07:58.064 Test: blob_inflate ...[2024-07-12 08:34:33.172401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:58.064 passed
00:07:58.064 Test: blob_delete ...passed
00:07:58.064 Test: blob_resize_test ...[2024-07-12 08:34:33.250192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:58.323 passed
00:07:58.323 Test: blob_resize_thin_test ...passed
00:07:58.323 Test: channel_ops ...passed
00:07:58.323 Test: blob_super ...passed
00:07:58.323 Test: blob_rw_verify_iov ...passed
00:07:58.323 Test: blob_unmap ...passed
00:07:58.582 Test: blob_iter ...passed
00:07:58.582 Test: blob_parse_md ...passed
00:07:58.582 Test: bs_load_pending_removal ...passed
00:07:58.582 Test: bs_unload ...[2024-07-12 08:34:33.630955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:58.582 passed
00:07:58.582 Test: bs_usable_clusters ...passed
00:07:58.582 Test: blob_crc ...[2024-07-12 08:34:33.714043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:58.582 [2024-07-12 08:34:33.714467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:58.582 passed
00:07:58.582 Test: blob_flags ...passed
00:07:58.841 Test: bs_version ...passed
00:07:58.841 Test: blob_set_xattrs_test ...[2024-07-12 08:34:33.835254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:58.841 [2024-07-12 08:34:33.835669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:58.841 passed
00:07:58.841 Test: blob_thin_prov_alloc ...passed
00:07:59.100 Test: blob_insert_cluster_msg_test ...passed
00:07:59.100 Test: blob_thin_prov_rw ...passed
00:07:59.100 Test: blob_thin_prov_rle ...passed
00:07:59.100 Test: blob_thin_prov_rw_iov ...passed
00:07:59.100 Test: blob_snapshot_rw ...passed
00:07:59.100 Test: blob_snapshot_rw_iov ...passed
00:07:59.358 Test: blob_inflate_rw ...passed
00:07:59.358 Test: blob_snapshot_freeze_io ...passed
00:07:59.617 Test: blob_operation_split_rw ...passed
00:07:59.875 Test: blob_operation_split_rw_iov ...passed
00:07:59.875 Test: blob_simultaneous_operations ...[2024-07-12 08:34:34.835107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:59.875 [2024-07-12 08:34:34.835483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:59.875 [2024-07-12 08:34:34.836136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:59.875 [2024-07-12 08:34:34.836369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:59.875 [2024-07-12 08:34:34.839162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:59.875 [2024-07-12 08:34:34.839342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:59.875 [2024-07-12 08:34:34.839657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:59.875 [2024-07-12 08:34:34.839808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:59.875 passed
00:07:59.875 Test: blob_persist_test ...passed
00:07:59.875 Test: blob_decouple_snapshot ...passed
00:07:59.875 Test: blob_seek_io_unit ...passed
00:07:59.875 Test: blob_nested_freezes ...passed
00:07:59.875 Test: blob_clone_resize ...passed
00:08:00.133 Test: blob_shallow_copy ...[2024-07-12 08:34:35.091382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:00.133 [2024-07-12 08:34:35.092026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:00.133 [2024-07-12 08:34:35.092494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:00.133 passed
00:08:00.133 Suite: blob_blob_copy_noextent
00:08:00.133 Test: blob_write ...passed
00:08:00.133 Test: blob_read ...passed
00:08:00.133 Test: blob_rw_verify ...passed
00:08:00.133 Test: blob_rw_verify_iov_nomem ...passed
00:08:00.133 Test: blob_rw_iov_read_only ...passed
00:08:00.448 Test: blob_xattr ...passed
00:08:00.448 Test: blob_dirty_shutdown ...passed
00:08:00.448 Test: blob_is_degraded ...passed
00:08:00.448 Suite: blob_esnap_bs_copy_noextent
00:08:00.448 Test: blob_esnap_create ...passed
00:08:00.448 Test: blob_esnap_thread_add_remove ...passed
00:08:00.448 Test: blob_esnap_clone_snapshot ...passed
00:08:00.448 Test: blob_esnap_clone_inflate ...passed
00:08:00.448 Test: blob_esnap_clone_decouple ...passed
00:08:00.722 Test: blob_esnap_clone_reload ...passed
00:08:00.722 Test: blob_esnap_hotplug ...passed
00:08:00.722 Test: blob_set_parent ...[2024-07-12 08:34:35.691158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:00.722 [2024-07-12 08:34:35.691478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:00.722 [2024-07-12 08:34:35.691756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:00.722 [2024-07-12 08:34:35.691926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:00.722 [2024-07-12 08:34:35.692566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:00.722 passed
00:08:00.722 Test: blob_set_external_parent ...[2024-07-12 08:34:35.732065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:00.722 [2024-07-12 08:34:35.732475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:00.722 [2024-07-12 08:34:35.732687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:00.722 [2024-07-12 08:34:35.733241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:00.722 passed
00:08:00.722 Suite: blob_copy_extent
00:08:00.722 Test: blob_init ...[2024-07-12 08:34:35.746479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:08:00.722 passed
00:08:00.722 Test: blob_thin_provision ...passed
00:08:00.722 Test: blob_read_only ...passed
00:08:00.722 Test: bs_load ...[2024-07-12 08:34:35.799634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:08:00.722 passed
00:08:00.722 Test: bs_load_custom_cluster_size ...passed
00:08:00.722 Test: bs_load_after_failed_grow ...passed
00:08:00.722 Test: bs_cluster_sz ...[2024-07-12 08:34:35.828392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:08:00.722 [2024-07-12 08:34:35.828754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:08:00.722 [2024-07-12 08:34:35.828999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:08:00.722 passed
00:08:00.722 Test: bs_resize_md ...passed
00:08:00.722 Test: bs_destroy ...passed
00:08:00.722 Test: bs_type ...passed
00:08:00.722 Test: bs_super_block ...passed
00:08:00.722 Test: bs_test_recover_cluster_count ...passed
00:08:00.722 Test: bs_grow_live ...passed
00:08:00.723 Test: bs_grow_live_no_space ...passed
00:08:00.981 Test: bs_test_grow ...passed
00:08:00.981 Test: blob_serialize_test ...passed
00:08:00.981 Test: super_block_crc ...passed
00:08:00.981 Test: blob_thin_prov_write_count_io ...passed
00:08:00.981 Test: blob_thin_prov_unmap_cluster ...passed
00:08:00.981 Test: bs_load_iter_test ...passed
00:08:00.981 Test: blob_relations ...[2024-07-12 08:34:36.018362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.018765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 [2024-07-12 08:34:36.019512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.019683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 passed
00:08:00.981 Test: blob_relations2 ...[2024-07-12 08:34:36.034081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.034353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 [2024-07-12 08:34:36.034565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.034695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 [2024-07-12 08:34:36.035862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.036050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 [2024-07-12 08:34:36.036578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:08:00.981 [2024-07-12 08:34:36.036802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:00.981 passed
00:08:00.981 Test: blob_relations3 ...passed
00:08:01.241 Test: blobstore_clean_power_failure ...passed
00:08:01.241 Test: blob_delete_snapshot_power_failure ...[2024-07-12 08:34:36.195097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:01.241 [2024-07-12 08:34:36.208341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:01.241 [2024-07-12 08:34:36.221993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:01.241 [2024-07-12 08:34:36.222290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:01.241 [2024-07-12 08:34:36.222483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 [2024-07-12 08:34:36.236209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:01.241 [2024-07-12 08:34:36.236554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:01.241 [2024-07-12 08:34:36.236726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:01.241 [2024-07-12 08:34:36.236893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 [2024-07-12 08:34:36.251113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:01.241 [2024-07-12 08:34:36.254451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:08:01.241 [2024-07-12 08:34:36.254630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:08:01.241 [2024-07-12 08:34:36.254783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 [2024-07-12 08:34:36.268587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:08:01.241 [2024-07-12 08:34:36.268928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 [2024-07-12 08:34:36.281611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:08:01.241 [2024-07-12 08:34:36.281976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 [2024-07-12 08:34:36.294769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:08:01.241 [2024-07-12 08:34:36.295071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:01.241 passed
00:08:01.241 Test: blob_create_snapshot_power_failure ...[2024-07-12 08:34:36.332720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:08:01.241 [2024-07-12 08:34:36.347085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:08:01.241 [2024-07-12 08:34:36.376174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:08:01.241 [2024-07-12 08:34:36.391078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:08:01.500 passed
00:08:01.500 Test: blob_io_unit ...passed
00:08:01.500 Test: blob_io_unit_compatibility ...passed
00:08:01.500 Test: blob_ext_md_pages ...passed
00:08:01.500 Test: blob_esnap_io_4096_4096 ...passed
00:08:01.500 Test: blob_esnap_io_512_512 ...passed
00:08:01.500 Test: blob_esnap_io_4096_512 ...passed
00:08:01.500 Test: blob_esnap_io_512_4096 ...passed
00:08:01.500 Test: blob_esnap_clone_resize ...passed
00:08:01.500 Suite: blob_bs_copy_extent
00:08:01.500 Test: blob_open ...passed
00:08:01.500 Test: blob_create ...[2024-07-12 08:34:36.679369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:08:01.758 passed
00:08:01.758 Test: blob_create_loop ...passed
00:08:01.758 Test: blob_create_fail ...[2024-07-12 08:34:36.797918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:01.758 passed
00:08:01.758 Test: blob_create_internal ...passed
00:08:01.758 Test: blob_create_zero_extent ...passed
00:08:01.758 Test: blob_snapshot ...passed
00:08:02.016 Test: blob_clone ...passed
00:08:02.016 Test: blob_inflate ...[2024-07-12 08:34:37.000663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:08:02.016 passed
00:08:02.016 Test: blob_delete ...passed
00:08:02.016 Test: blob_resize_test ...[2024-07-12 08:34:37.076013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:08:02.016 passed
00:08:02.016 Test: blob_resize_thin_test ...passed
00:08:02.016 Test: channel_ops ...passed
00:08:02.275 Test: blob_super ...passed
00:08:02.275 Test: blob_rw_verify_iov ...passed
00:08:02.275 Test: blob_unmap ...passed
00:08:02.275 Test: blob_iter ...passed
00:08:02.275 Test: blob_parse_md ...passed
00:08:02.275 Test: bs_load_pending_removal ...passed
00:08:02.275 Test: bs_unload ...[2024-07-12 08:34:37.408276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:08:02.275 passed
00:08:02.275 Test: bs_usable_clusters ...passed
00:08:02.534 Test: blob_crc ...[2024-07-12 08:34:37.482545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:02.534 [2024-07-12 08:34:37.482891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:08:02.534 passed
00:08:02.534 Test: blob_flags ...passed
00:08:02.534 Test: bs_version ...passed
00:08:02.534 Test: blob_set_xattrs_test ...[2024-07-12 08:34:37.605044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:02.534 [2024-07-12 08:34:37.605596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:08:02.534 passed
00:08:02.793 Test: blob_thin_prov_alloc ...passed
00:08:02.793 Test: blob_insert_cluster_msg_test ...passed
00:08:02.793 Test: blob_thin_prov_rw ...passed
00:08:02.793 Test: blob_thin_prov_rle ...passed
00:08:02.793 Test: blob_thin_prov_rw_iov ...passed
00:08:02.793 Test: blob_snapshot_rw ...passed
00:08:03.051 Test: blob_snapshot_rw_iov ...passed
00:08:03.051 Test: blob_inflate_rw ...passed
00:08:03.309 Test: blob_snapshot_freeze_io ...passed
00:08:03.309 Test: blob_operation_split_rw ...passed
00:08:03.566 Test: blob_operation_split_rw_iov ...passed
00:08:03.566 Test: blob_simultaneous_operations ...[2024-07-12 08:34:38.587253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:03.566 [2024-07-12 08:34:38.587627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:03.566 [2024-07-12 08:34:38.588174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:03.566 [2024-07-12 08:34:38.588375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:03.566 [2024-07-12 08:34:38.591073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:03.566 [2024-07-12 08:34:38.591288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:03.566 [2024-07-12 08:34:38.591449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:08:03.566 [2024-07-12 08:34:38.591596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:08:03.566 passed
00:08:03.566 Test: blob_persist_test ...passed
00:08:03.566 Test: blob_decouple_snapshot ...passed
00:08:03.566 Test: blob_seek_io_unit ...passed
00:08:03.824 Test: blob_nested_freezes ...passed
00:08:03.824 Test: blob_clone_resize ...passed
00:08:03.824 Test: blob_shallow_copy ...[2024-07-12 08:34:38.852422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:08:03.824 [2024-07-12 08:34:38.853045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:08:03.824 [2024-07-12 08:34:38.853437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:08:03.824 passed
00:08:03.824 Suite: blob_blob_copy_extent
00:08:03.824 Test: blob_write ...passed
00:08:03.824 Test: blob_read ...passed
00:08:03.824 Test: blob_rw_verify ...passed
00:08:04.081 Test: blob_rw_verify_iov_nomem ...passed
00:08:04.081 Test: blob_rw_iov_read_only ...passed
00:08:04.081 Test: blob_xattr ...passed
00:08:04.081 Test: blob_dirty_shutdown ...passed
00:08:04.081 Test: blob_is_degraded ...passed
00:08:04.081 Suite: blob_esnap_bs_copy_extent
00:08:04.081 Test: blob_esnap_create ...passed
00:08:04.081 Test: blob_esnap_thread_add_remove ...passed
00:08:04.338 Test: blob_esnap_clone_snapshot ...passed
00:08:04.338 Test: blob_esnap_clone_inflate ...passed
00:08:04.338 Test: blob_esnap_clone_decouple ...passed
00:08:04.338 Test: blob_esnap_clone_reload ...passed
00:08:04.338 Test: blob_esnap_hotplug ...passed
00:08:04.338 Test: blob_set_parent ...[2024-07-12 08:34:39.464197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:08:04.338 [2024-07-12 08:34:39.464591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:08:04.338 [2024-07-12 08:34:39.464837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:08:04.338 [2024-07-12 08:34:39.464977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:08:04.338 [2024-07-12 08:34:39.465564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:04.338 passed
00:08:04.338 Test: blob_set_external_parent ...[2024-07-12 08:34:39.503936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:08:04.338 [2024-07-12 08:34:39.504317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:08:04.338 [2024-07-12 08:34:39.504441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:08:04.338 [2024-07-12 08:34:39.504950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:08:04.338 passed
00:08:04.338
00:08:04.338 Run Summary: Type Total Ran Passed Failed Inactive
00:08:04.338 suites 16 16 n/a 0 0
00:08:04.338 tests 376 376 376 0 0
00:08:04.338 asserts 143965 143965 143965 0 n/a
00:08:04.338
00:08:04.338 Elapsed time = 16.101 seconds
00:08:04.597 08:34:39 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:08:04.597
00:08:04.597
00:08:04.597 CUnit - A unit testing framework for C - Version 2.1-3
00:08:04.597 http://cunit.sourceforge.net/
00:08:04.597
00:08:04.597
00:08:04.597 Suite: blob_bdev
00:08:04.597 Test: create_bs_dev ...passed
00:08:04.597 Test: create_bs_dev_ro ...[2024-07-12 08:34:39.615725] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:08:04.597 passed
00:08:04.597 Test: create_bs_dev_rw ...passed
00:08:04.597 Test: claim_bs_dev ...[2024-07-12 08:34:39.616769] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:08:04.597 passed
00:08:04.597 Test: claim_bs_dev_ro ...passed
00:08:04.597 Test: deferred_destroy_refs ...passed
00:08:04.597 Test: deferred_destroy_channels ...passed
00:08:04.597 Test: deferred_destroy_threads ...passed
00:08:04.597
00:08:04.597 Run Summary: Type Total Ran Passed Failed Inactive
00:08:04.597 suites 1 1 n/a 0 0
00:08:04.597 tests 8 8 8 0 0
00:08:04.597 asserts 119 119 119 0 n/a
00:08:04.597
00:08:04.597 Elapsed time = 0.001 seconds
00:08:04.597 08:34:39 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:08:04.597
00:08:04.597
00:08:04.597 CUnit - A unit testing framework for C - Version 2.1-3
00:08:04.597 http://cunit.sourceforge.net/
00:08:04.597
00:08:04.597
00:08:04.597 Suite: tree
00:08:04.597 Test: blobfs_tree_op_test ...passed
00:08:04.597
00:08:04.597 Run Summary: Type Total Ran Passed Failed Inactive
00:08:04.597 suites 1 1 n/a 0 0
00:08:04.597 tests 1 1 1 0 0
00:08:04.597 asserts 27 27 27 0 n/a
00:08:04.597
00:08:04.597 Elapsed time = 0.000 seconds
00:08:04.597 08:34:39 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:08:04.597
00:08:04.597
CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.597 http://cunit.sourceforge.net/ 00:08:04.597 00:08:04.597 00:08:04.597 Suite: blobfs_async_ut 00:08:04.597 Test: fs_init ...passed 00:08:04.597 Test: fs_open ...passed 00:08:04.855 Test: fs_create ...passed 00:08:04.855 Test: fs_truncate ...passed 00:08:04.855 Test: fs_rename ...[2024-07-12 08:34:39.832697] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:04.855 passed 00:08:04.855 Test: fs_rw_async ...passed 00:08:04.855 Test: fs_writev_readv_async ...passed 00:08:04.855 Test: tree_find_buffer_ut ...passed 00:08:04.855 Test: channel_ops ...passed 00:08:04.855 Test: channel_ops_sync ...passed 00:08:04.855 00:08:04.855 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.855 suites 1 1 n/a 0 0 00:08:04.855 tests 10 10 10 0 0 00:08:04.855 asserts 292 292 292 0 n/a 00:08:04.855 00:08:04.855 Elapsed time = 0.194 seconds 00:08:04.855 08:34:39 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:04.855 00:08:04.855 00:08:04.855 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.855 http://cunit.sourceforge.net/ 00:08:04.855 00:08:04.855 00:08:04.855 Suite: blobfs_sync_ut 00:08:04.855 Test: cache_read_after_write ...[2024-07-12 08:34:40.038856] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:04.855 passed 00:08:05.113 Test: file_length ...passed 00:08:05.113 Test: append_write_to_extend_blob ...passed 00:08:05.113 Test: partial_buffer ...passed 00:08:05.113 Test: cache_write_null_buffer ...passed 00:08:05.113 Test: fs_create_sync ...passed 00:08:05.113 Test: fs_rename_sync ...passed 00:08:05.113 Test: cache_append_no_cache ...passed 00:08:05.113 Test: fs_delete_file_without_close ...passed 00:08:05.113 00:08:05.113 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.113 suites 1 1 n/a 0 0 00:08:05.113 tests 9 9 9 0 0 00:08:05.113 asserts 345 345 345 0 n/a 00:08:05.113 00:08:05.113 Elapsed time = 0.415 seconds 00:08:05.113 08:34:40 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:05.113 00:08:05.113 00:08:05.113 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.113 http://cunit.sourceforge.net/ 00:08:05.113 00:08:05.113 00:08:05.113 Suite: blobfs_bdev_ut 00:08:05.113 Test: spdk_blobfs_bdev_detect_test ...[2024-07-12 08:34:40.250735] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:05.113 passed 00:08:05.113 Test: spdk_blobfs_bdev_create_test ...[2024-07-12 08:34:40.251492] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:05.113 passed 00:08:05.113 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:05.113 00:08:05.113 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.113 suites 1 1 n/a 0 0 00:08:05.113 tests 3 3 3 0 0 00:08:05.113 asserts 9 9 9 0 n/a 00:08:05.113 00:08:05.113 Elapsed time = 0.001 seconds 00:08:05.113 ************************************ 00:08:05.113 END TEST unittest_blob_blobfs 00:08:05.113 ************************************ 00:08:05.113 00:08:05.113 real 0m17.067s 00:08:05.113 user 0m16.227s 
00:08:05.113 sys 0m0.869s 00:08:05.113 08:34:40 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.113 08:34:40 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:08:05.372 08:34:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.372 08:34:40 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:08:05.372 08:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.372 08:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.372 08:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.372 ************************************ 00:08:05.372 START TEST unittest_event 00:08:05.372 ************************************ 00:08:05.372 08:34:40 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:08:05.372 08:34:40 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:05.372 00:08:05.372 00:08:05.372 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.372 http://cunit.sourceforge.net/ 00:08:05.372 00:08:05.372 00:08:05.372 Suite: app_suite 00:08:05.372 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:08:05.372 app_ut [options] 00:08:05.372 00:08:05.372 CPU options: 00:08:05.372 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:05.372 (like [0,1,10]) 00:08:05.372 --lcores lcore to CPU mapping list. The list is in the format: 00:08:05.372 [<,lcores[@CPUs]>...] 00:08:05.372 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:05.372 Within the group, '-' is used for range separator, 00:08:05.372 ',' is used for single number separator. 00:08:05.372 '( )' can be omitted for single element group, 00:08:05.372 '@' can be omitted if cpus and lcores have the same value 00:08:05.372 --disable-cpumask-locks Disable CPU core lock files. 00:08:05.372 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:05.372 pollers in the app support interrupt mode) 00:08:05.372 -p, --main-core main (primary) core for DPDK 00:08:05.372 00:08:05.372 Configuration options: 00:08:05.372 -c, --config, --json JSON config file 00:08:05.372 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:05.372 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:05.372 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:05.372 --rpcs-allowed comma-separated list of permitted RPCS 00:08:05.372 --json-ignore-init-errors don't exit on invalid config entry 00:08:05.372 00:08:05.372 Memory options: 00:08:05.372 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:05.372 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:05.372 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:05.372 -R, --huge-unlink unlink huge files after initialization 00:08:05.372 -n, --mem-channels number of memory channels used for DPDK 00:08:05.372 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:05.372 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:05.372 --no-huge run without using hugepages 00:08:05.372 -i, --shm-id shared memory ID (optional) 00:08:05.373 -g, --single-file-segments force creating just one hugetlbfs file 00:08:05.373 00:08:05.373 PCI options: 00:08:05.373 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:05.373 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:05.373 -u, --no-pci disable PCI access 00:08:05.373 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:05.373 00:08:05.373 Log options: 00:08:05.373 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:05.373 --silence-noticelog disable notice level logging to stderr 00:08:05.373 00:08:05.373 Trace options: 00:08:05.373 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:05.373 setting 0 to disable trace (default 32768) 00:08:05.373 Tracepoints vary in size and can use more than one trace entry. 00:08:05.373 -e, --tpoint-group [:] 00:08:05.373 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:05.373 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:05.373 a tracepoint group. First tpoint inside a group can be enabled by 00:08:05.373 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:05.373 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:05.373 in /include/spdk_internal/trace_defs.h 00:08:05.373 00:08:05.373 Other options: 00:08:05.373 -h, --help show this usage 00:08:05.373 -v, --version print SPDK version 00:08:05.373 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:05.373 --env-context Opaque context for use of the env implementation 00:08:05.373 app_ut: unrecognized option '--test-long-opt' 00:08:05.373 app_ut [options] 00:08:05.373 00:08:05.373 CPU options: 00:08:05.373 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:05.373 (like [0,1,10]) 00:08:05.373 --lcores lcore to CPU mapping list. The list is in the format: 00:08:05.373 [<,lcores[@CPUs]>...] 00:08:05.373 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:05.373 Within the group, '-' is used for range separator, 00:08:05.373 ',' is used for single number separator. 00:08:05.373 '( )' can be omitted for single element group, 00:08:05.373 '@' can be omitted if cpus and lcores have the same value 00:08:05.373 --disable-cpumask-locks Disable CPU core lock files. 
00:08:05.373 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:05.373 pollers in the app support interrupt mode) 00:08:05.373 -p, --main-core main (primary) core for DPDK 00:08:05.373 00:08:05.373 Configuration options: 00:08:05.373 -c, --config, --json JSON config file 00:08:05.373 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:05.373 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:05.373 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:05.373 --rpcs-allowed comma-separated list of permitted RPCS 00:08:05.373 --json-ignore-init-errors don't exit on invalid config entry 00:08:05.373 00:08:05.373 Memory options: 00:08:05.373 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:05.373 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:05.373 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:05.373 -R, --huge-unlink unlink huge files after initialization 00:08:05.373 -n, --mem-channels number of memory channels used for DPDK 00:08:05.373 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:05.373 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:05.373 --no-huge run without using hugepages 00:08:05.373 -i, --shm-id shared memory ID (optional) 00:08:05.373 -g, --single-file-segments force creating just one hugetlbfs file 00:08:05.373 00:08:05.373 PCI options: 00:08:05.373 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:05.373 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:05.373 -u, --no-pci disable PCI access 00:08:05.373 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:05.373 00:08:05.373 Log options: 00:08:05.373 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:05.373 --silence-noticelog disable notice level logging to stderr 00:08:05.373 00:08:05.373 Trace options: 00:08:05.373 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:05.373 setting 0 to disable trace (default 32768) 00:08:05.373 Tracepoints vary in size and can use more than one trace entry. 00:08:05.373 -e, --tpoint-group [:] 00:08:05.373 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:05.373 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:05.373 a tracepoint group. First tpoint inside a group can be enabled by 00:08:05.373 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:05.373 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:05.373 in /include/spdk_internal/trace_defs.h 00:08:05.373 00:08:05.373 Other options: 00:08:05.373 -h, --help show this usage 00:08:05.373 -v, --version print SPDK version 00:08:05.373 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:05.373 --env-context Opaque context for use of the env implementation 00:08:05.373 [2024-07-12 08:34:40.347914] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:08:05.373 [2024-07-12 08:34:40.348355] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:05.373 app_ut [options] 00:08:05.373 00:08:05.373 CPU options: 00:08:05.373 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:05.373 (like [0,1,10]) 00:08:05.373 --lcores lcore to CPU mapping list. The list is in the format: 00:08:05.373 [<,lcores[@CPUs]>...] 00:08:05.373 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:05.373 Within the group, '-' is used for range separator, 00:08:05.373 ',' is used for single number separator. 00:08:05.373 '( )' can be omitted for single element group, 00:08:05.373 '@' can be omitted if cpus and lcores have the same value 00:08:05.373 --disable-cpumask-locks Disable CPU core lock files. 00:08:05.373 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:05.373 pollers in the app support interrupt mode) 00:08:05.373 -p, --main-core main (primary) core for DPDK 00:08:05.373 00:08:05.373 Configuration options: 00:08:05.373 -c, --config, --json JSON config file 00:08:05.373 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:05.373 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:05.373 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:05.373 --rpcs-allowed comma-separated list of permitted RPCS 00:08:05.373 --json-ignore-init-errors don't exit on invalid config entry 00:08:05.373 00:08:05.373 Memory options: 00:08:05.373 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:05.373 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:05.373 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:05.373 -R, --huge-unlink unlink huge files after initialization 00:08:05.373 -n, --mem-channels number of memory channels used for DPDK 00:08:05.373 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:05.373 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:05.373 --no-huge run without using hugepages 00:08:05.373 -i, --shm-id shared memory ID (optional) 00:08:05.373 -g, --single-file-segments force creating just one hugetlbfs file 00:08:05.373 00:08:05.373 PCI options: 00:08:05.373 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:05.373 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:05.373 -u, --no-pci disable PCI access 00:08:05.373 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:05.373 00:08:05.373 Log options: 00:08:05.373 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:05.373 --silence-noticelog disable notice level logging to stderr 00:08:05.373 00:08:05.373 Trace options: 00:08:05.373 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:05.373 setting 0 to disable trace (default 32768) 00:08:05.373 Tracepoints vary in size and can use more than one trace entry. 00:08:05.373 -e, --tpoint-group [:] 00:08:05.373 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:05.373 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:05.373 a tracepoint group. First tpoint inside a group can be enabled by 00:08:05.373 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:08:05.373 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:05.373 in /include/spdk_internal/trace_defs.h 00:08:05.373 00:08:05.373 Other options: 00:08:05.373 -h, --help show this usage 00:08:05.373 -v, --version print SPDK version 00:08:05.373 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:05.373 --env-context Opaque context for use of the env implementation 00:08:05.373 passed 00:08:05.373 00:08:05.373 [2024-07-12 08:34:40.348632] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:05.373 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.373 suites 1 1 n/a 0 0 00:08:05.373 tests 1 1 1 0 0 00:08:05.373 asserts 8 8 8 0 n/a 00:08:05.373 00:08:05.373 Elapsed time = 0.002 seconds 00:08:05.373 08:34:40 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:05.374 00:08:05.374 00:08:05.374 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.374 http://cunit.sourceforge.net/ 00:08:05.374 00:08:05.374 00:08:05.374 Suite: app_suite 00:08:05.374 Test: test_create_reactor ...passed 00:08:05.374 Test: test_init_reactors ...passed 00:08:05.374 Test: test_event_call ...passed 00:08:05.374 Test: test_schedule_thread ...passed 00:08:05.374 Test: test_reschedule_thread ...passed 00:08:05.374 Test: test_bind_thread ...passed 00:08:05.374 Test: test_for_each_reactor ...passed 00:08:05.374 Test: test_reactor_stats ...passed 00:08:05.374 Test: test_scheduler ...passed 00:08:05.374 Test: test_governor ...passed 00:08:05.374 00:08:05.374 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.374 suites 1 1 n/a 0 0 00:08:05.374 tests 10 10 10 0 0 00:08:05.374 asserts 344 344 344 0 n/a 00:08:05.374 00:08:05.374 Elapsed time = 0.021 seconds 00:08:05.374 00:08:05.374 real 0m0.113s 00:08:05.374 user 0m0.048s 00:08:05.374 sys 0m0.056s 00:08:05.374 08:34:40 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.374 08:34:40 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:08:05.374 ************************************ 00:08:05.374 END TEST unittest_event 00:08:05.374 ************************************ 00:08:05.374 08:34:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:05.374 08:34:40 unittest -- unit/unittest.sh@235 -- # uname -s 00:08:05.374 08:34:40 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:08:05.374 08:34:40 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:08:05.374 08:34:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.374 08:34:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.374 08:34:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.374 ************************************ 00:08:05.374 START TEST unittest_ftl 00:08:05.374 ************************************ 00:08:05.374 08:34:40 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:08:05.374 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:05.374 00:08:05.374 00:08:05.374 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.374 http://cunit.sourceforge.net/ 00:08:05.374 00:08:05.374 00:08:05.374 Suite: ftl_band_suite 00:08:05.374 Test: test_band_block_offset_from_addr_base ...passed 00:08:05.631 Test: 
test_band_block_offset_from_addr_offset ...passed 00:08:05.631 Test: test_band_addr_from_block_offset ...passed 00:08:05.631 Test: test_band_set_addr ...passed 00:08:05.631 Test: test_invalidate_addr ...passed 00:08:05.631 Test: test_next_xfer_addr ...passed 00:08:05.631 00:08:05.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.631 suites 1 1 n/a 0 0 00:08:05.631 tests 6 6 6 0 0 00:08:05.631 asserts 30356 30356 30356 0 n/a 00:08:05.631 00:08:05.631 Elapsed time = 0.186 seconds 00:08:05.631 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:05.631 00:08:05.631 00:08:05.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.631 http://cunit.sourceforge.net/ 00:08:05.631 00:08:05.631 00:08:05.631 Suite: ftl_bitmap 00:08:05.631 Test: test_ftl_bitmap_create ...[2024-07-12 08:34:40.780246] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:05.631 [2024-07-12 08:34:40.780764] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:05.631 passed 00:08:05.631 Test: test_ftl_bitmap_get ...passed 00:08:05.631 Test: test_ftl_bitmap_set ...passed 00:08:05.631 Test: test_ftl_bitmap_clear ...passed 00:08:05.631 Test: test_ftl_bitmap_find_first_set ...passed 00:08:05.631 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:05.631 Test: test_ftl_bitmap_count_set ...passed 00:08:05.631 00:08:05.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.631 suites 1 1 n/a 0 0 00:08:05.631 tests 7 7 7 0 0 00:08:05.631 asserts 137 137 137 0 n/a 00:08:05.631 00:08:05.631 Elapsed time = 0.001 seconds 00:08:05.631 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:05.631 00:08:05.631 00:08:05.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.631 http://cunit.sourceforge.net/ 00:08:05.631 00:08:05.631 00:08:05.631 Suite: ftl_io_suite 00:08:05.631 Test: test_completion ...passed 00:08:05.631 Test: test_multiple_ios ...passed 00:08:05.631 00:08:05.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.631 suites 1 1 n/a 0 0 00:08:05.631 tests 2 2 2 0 0 00:08:05.631 asserts 47 47 47 0 n/a 00:08:05.631 00:08:05.631 Elapsed time = 0.003 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_mngt 00:08:05.889 Test: test_next_step ...passed 00:08:05.889 Test: test_continue_step ...passed 00:08:05.889 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:05.889 Test: test_fail_step ...passed 00:08:05.889 Test: test_mngt_call_and_call_rollback ...passed 00:08:05.889 Test: test_nested_process_failure ...passed 00:08:05.889 Test: test_call_init_success ...passed 00:08:05.889 Test: test_call_init_failure ...passed 00:08:05.889 00:08:05.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.889 suites 1 1 n/a 0 0 00:08:05.889 tests 8 8 8 0 0 00:08:05.889 asserts 196 196 196 0 n/a 00:08:05.889 00:08:05.889 Elapsed time = 0.002 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_mempool 00:08:05.889 Test: test_ftl_mempool_create ...passed 00:08:05.889 Test: test_ftl_mempool_get_put ...passed 00:08:05.889 00:08:05.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.889 suites 1 1 n/a 0 0 00:08:05.889 tests 2 2 2 0 0 00:08:05.889 asserts 36 36 36 0 n/a 00:08:05.889 00:08:05.889 Elapsed time = 0.000 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_addr64_suite 00:08:05.889 Test: test_addr_cached ...passed 00:08:05.889 00:08:05.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.889 suites 1 1 n/a 0 0 00:08:05.889 tests 1 1 1 0 0 00:08:05.889 asserts 1536 1536 1536 0 n/a 00:08:05.889 00:08:05.889 Elapsed time = 0.000 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_sb 00:08:05.889 Test: test_sb_crc_v2 ...passed 00:08:05.889 Test: test_sb_crc_v3 ...passed 00:08:05.889 Test: test_sb_v3_md_layout ...[2024-07-12 08:34:40.942675] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:05.889 [2024-07-12 08:34:40.943160] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.889 [2024-07-12 08:34:40.943335] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.889 [2024-07-12 08:34:40.943526] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.889 [2024-07-12 08:34:40.943688] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:05.889 [2024-07-12 08:34:40.943905] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:05.889 [2024-07-12 08:34:40.944046] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:05.889 [2024-07-12 08:34:40.944235] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:05.889 [2024-07-12 08:34:40.944474] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:05.889 [2024-07-12 08:34:40.944629] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 
00:08:05.889 [2024-07-12 08:34:40.944796] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:05.889 passed 00:08:05.889 Test: test_sb_v5_md_layout ...passed 00:08:05.889 00:08:05.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.889 suites 1 1 n/a 0 0 00:08:05.889 tests 4 4 4 0 0 00:08:05.889 asserts 160 160 160 0 n/a 00:08:05.889 00:08:05.889 Elapsed time = 0.003 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_layout_upgrade 00:08:05.889 Test: test_l2p_upgrade ...passed 00:08:05.889 00:08:05.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.889 suites 1 1 n/a 0 0 00:08:05.889 tests 1 1 1 0 0 00:08:05.889 asserts 152 152 152 0 n/a 00:08:05.889 00:08:05.889 Elapsed time = 0.001 seconds 00:08:05.889 08:34:40 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:08:05.889 00:08:05.889 00:08:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.889 http://cunit.sourceforge.net/ 00:08:05.889 00:08:05.889 00:08:05.889 Suite: ftl_p2l_suite 00:08:05.889 Test: test_p2l_num_pages ...passed 00:08:06.457 Test: test_ckpt_issue ...passed 00:08:07.024 Test: test_persist_band_p2l ...passed 00:08:07.284 Test: test_clean_restore_p2l ...passed 00:08:08.659 Test: test_dirty_restore_p2l ...passed 00:08:08.659 00:08:08.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.659 suites 1 1 n/a 0 0 00:08:08.659 tests 5 5 5 0 0 00:08:08.659 asserts 10020 10020 10020 0 n/a 00:08:08.659 00:08:08.659 Elapsed time = 2.572 seconds 00:08:08.659 ************************************ 00:08:08.659 END TEST unittest_ftl 00:08:08.659 ************************************ 00:08:08.659 00:08:08.659 real 0m3.124s 00:08:08.659 user 0m1.029s 00:08:08.659 sys 0m2.073s 00:08:08.659 08:34:43 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.659 08:34:43 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:08.659 08:34:43 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:08.659 ************************************ 00:08:08.659 START TEST unittest_accel 00:08:08.659 ************************************ 00:08:08.659 08:34:43 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:08.659 00:08:08.659 00:08:08.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.659 http://cunit.sourceforge.net/ 00:08:08.659 00:08:08.659 00:08:08.659 Suite: accel_sequence 00:08:08.659 Test: test_sequence_fill_copy ...passed 00:08:08.659 Test: test_sequence_abort ...passed 00:08:08.659 Test: test_sequence_append_error ...passed 00:08:08.659 Test: test_sequence_completion_error 
...[2024-07-12 08:34:43.697641] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1957:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f12029547c0 00:08:08.659 [2024-07-12 08:34:43.698135] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1957:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f12029547c0 00:08:08.659 [2024-07-12 08:34:43.698341] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1867:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f12029547c0 00:08:08.659 [2024-07-12 08:34:43.698511] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1867:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f12029547c0 00:08:08.659 passed 00:08:08.659 Test: test_sequence_decompress ...passed 00:08:08.659 Test: test_sequence_reverse ...passed 00:08:08.659 Test: test_sequence_copy_elision ...passed 00:08:08.659 Test: test_sequence_accel_buffers ...passed 00:08:08.659 Test: test_sequence_memory_domain ...[2024-07-12 08:34:43.711505] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1759:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:08.659 [2024-07-12 08:34:43.711815] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1798:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:08.659 passed 00:08:08.659 Test: test_sequence_module_memory_domain ...passed 00:08:08.659 Test: test_sequence_crypto ...passed 00:08:08.659 Test: test_sequence_driver ...[2024-07-12 08:34:43.719596] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1906:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f1201c037c0 using driver: ut 00:08:08.659 [2024-07-12 08:34:43.719840] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1970:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f1201c037c0 through driver: ut 00:08:08.659 passed 00:08:08.659 Test: test_sequence_same_iovs ...passed 00:08:08.659 Test: test_sequence_crc32 ...passed 00:08:08.659 Suite: accel 00:08:08.659 Test: test_spdk_accel_task_complete ...passed 00:08:08.659 Test: test_get_task ...passed 00:08:08.659 Test: test_spdk_accel_submit_copy ...passed 00:08:08.659 Test: test_spdk_accel_submit_dualcast ...[2024-07-12 08:34:43.726053] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:08.659 [2024-07-12 08:34:43.726380] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:08.659 passed 00:08:08.659 Test: test_spdk_accel_submit_compare ...passed 00:08:08.659 Test: test_spdk_accel_submit_fill ...passed 00:08:08.659 Test: test_spdk_accel_submit_crc32c ...passed 00:08:08.659 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:08.659 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:08.659 Test: test_spdk_accel_submit_xor ...passed 00:08:08.659 Test: test_spdk_accel_module_find_by_name ...passed 00:08:08.659 Test: test_spdk_accel_module_register ...passed 00:08:08.659 00:08:08.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.659 suites 2 2 n/a 0 0 00:08:08.659 tests 26 26 26 0 0 00:08:08.659 asserts 830 830 830 0 n/a 00:08:08.659 00:08:08.659 Elapsed time = 0.037 seconds 00:08:08.659 00:08:08.659 real 0m0.086s 00:08:08.659 user 0m0.045s 00:08:08.659 sys 0m0.035s 00:08:08.659 08:34:43 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:08.659 08:34:43 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.659 ************************************ 00:08:08.659 END TEST unittest_accel 00:08:08.659 ************************************ 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:08.659 08:34:43 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.659 08:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:08.659 ************************************ 00:08:08.659 START TEST unittest_ioat 00:08:08.659 ************************************ 00:08:08.659 08:34:43 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:08.659 00:08:08.659 00:08:08.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.659 http://cunit.sourceforge.net/ 00:08:08.659 00:08:08.659 00:08:08.659 Suite: ioat 00:08:08.659 Test: ioat_state_check ...passed 00:08:08.659 00:08:08.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.659 suites 1 1 n/a 0 0 00:08:08.659 tests 1 1 1 0 0 00:08:08.659 asserts 32 32 32 0 n/a 00:08:08.659 00:08:08.659 Elapsed time = 0.000 seconds 00:08:08.659 00:08:08.659 real 0m0.029s 00:08:08.659 user 0m0.015s 00:08:08.659 sys 0m0.014s 00:08:08.659 08:34:43 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.659 08:34:43 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:08:08.659 ************************************ 00:08:08.659 END TEST unittest_ioat 00:08:08.659 ************************************ 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:08.918 08:34:43 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.918 08:34:43 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 ************************************ 00:08:08.918 START TEST unittest_idxd_user 00:08:08.918 ************************************ 00:08:08.918 08:34:43 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:08.918 00:08:08.918 00:08:08.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.918 http://cunit.sourceforge.net/ 00:08:08.918 00:08:08.918 00:08:08.918 Suite: idxd_user 00:08:08.918 Test: test_idxd_wait_cmd ...[2024-07-12 08:34:43.905982] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:08.918 [2024-07-12 08:34:43.906393] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:08.918 passed 00:08:08.918 Test: test_idxd_reset_dev ...[2024-07-12 08:34:43.906834] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 
00:08:08.918 passed[2024-07-12 08:34:43.906980] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:08.918 00:08:08.918 Test: test_idxd_group_config ...passed 00:08:08.918 Test: test_idxd_wq_config ...passed 00:08:08.918 00:08:08.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.918 suites 1 1 n/a 0 0 00:08:08.918 tests 4 4 4 0 0 00:08:08.918 asserts 20 20 20 0 n/a 00:08:08.918 00:08:08.918 Elapsed time = 0.001 seconds 00:08:08.918 00:08:08.918 real 0m0.033s 00:08:08.918 user 0m0.013s 00:08:08.918 sys 0m0.019s 00:08:08.918 08:34:43 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.918 08:34:43 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 ************************************ 00:08:08.918 END TEST unittest_idxd_user 00:08:08.918 ************************************ 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:08.918 08:34:43 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.918 08:34:43 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:08.918 ************************************ 00:08:08.918 START TEST unittest_iscsi 00:08:08.918 ************************************ 00:08:08.918 08:34:43 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:08:08.918 08:34:43 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:08.918 00:08:08.918 00:08:08.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.918 http://cunit.sourceforge.net/ 00:08:08.918 00:08:08.918 00:08:08.918 Suite: conn_suite 00:08:08.918 Test: read_task_split_in_order_case ...passed 00:08:08.918 Test: read_task_split_reverse_order_case ...passed 00:08:08.918 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:08.918 Test: process_non_read_task_completion_test ...passed 00:08:08.918 Test: free_tasks_on_connection ...passed 00:08:08.918 Test: free_tasks_with_queued_datain ...passed 00:08:08.918 Test: abort_queued_datain_task_test ...passed 00:08:08.918 Test: abort_queued_datain_tasks_test ...passed 00:08:08.918 00:08:08.918 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.918 suites 1 1 n/a 0 0 00:08:08.918 tests 8 8 8 0 0 00:08:08.918 asserts 230 230 230 0 n/a 00:08:08.918 00:08:08.918 Elapsed time = 0.000 seconds 00:08:08.918 08:34:44 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:08.918 00:08:08.918 00:08:08.918 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.918 http://cunit.sourceforge.net/ 00:08:08.918 00:08:08.918 00:08:08.918 Suite: iscsi_suite 00:08:08.918 Test: param_negotiation_test ...passed 00:08:08.918 Test: list_negotiation_test ...passed 00:08:08.918 Test: parse_valid_test ...passed 00:08:08.919 Test: parse_invalid_test ...[2024-07-12 08:34:44.027424] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:08.919 [2024-07-12 08:34:44.027864] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:08.919 [2024-07-12 08:34:44.028024] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
207:iscsi_parse_param: *ERROR*: Empty key 00:08:08.919 [2024-07-12 08:34:44.028193] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:08.919 [2024-07-12 08:34:44.028467] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:08.919 [2024-07-12 08:34:44.028642] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:08.919 [2024-07-12 08:34:44.028880] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:08.919 passed 00:08:08.919 00:08:08.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.919 suites 1 1 n/a 0 0 00:08:08.919 tests 4 4 4 0 0 00:08:08.919 asserts 161 161 161 0 n/a 00:08:08.919 00:08:08.919 Elapsed time = 0.005 seconds 00:08:08.919 08:34:44 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:08.919 00:08:08.919 00:08:08.919 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.919 http://cunit.sourceforge.net/ 00:08:08.919 00:08:08.919 00:08:08.919 Suite: iscsi_target_node_suite 00:08:08.919 Test: add_lun_test_cases ...[2024-07-12 08:34:44.057342] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:08.919 [2024-07-12 08:34:44.057773] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:08.919 [2024-07-12 08:34:44.057967] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:08.919 [2024-07-12 08:34:44.058124] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:08.919 [2024-07-12 08:34:44.058244] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:08.919 passed 00:08:08.919 Test: allow_any_allowed ...passed 00:08:08.919 Test: allow_ipv6_allowed ...passed 00:08:08.919 Test: allow_ipv6_denied ...passed 00:08:08.919 Test: allow_ipv6_invalid ...passed 00:08:08.919 Test: allow_ipv4_allowed ...passed 00:08:08.919 Test: allow_ipv4_denied ...passed 00:08:08.919 Test: allow_ipv4_invalid ...passed 00:08:08.919 Test: node_access_allowed ...passed 00:08:08.919 Test: node_access_denied_by_empty_netmask ...passed 00:08:08.919 Test: node_access_multi_initiator_groups_cases ...passed 00:08:08.919 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:08.919 Test: chap_param_test_cases ...[2024-07-12 08:34:44.060073] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:08.919 [2024-07-12 08:34:44.060219] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:08.919 [2024-07-12 08:34:44.060335] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:08.919 [2024-07-12 08:34:44.060390] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:08.919 [2024-07-12 08:34:44.060515] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID 
(-1) 00:08:08.919 passed 00:08:08.919 00:08:08.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.919 suites 1 1 n/a 0 0 00:08:08.919 tests 13 13 13 0 0 00:08:08.919 asserts 50 50 50 0 n/a 00:08:08.919 00:08:08.919 Elapsed time = 0.001 seconds 00:08:08.919 08:34:44 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:08.919 00:08:08.919 00:08:08.919 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.919 http://cunit.sourceforge.net/ 00:08:08.919 00:08:08.919 00:08:08.919 Suite: iscsi_suite 00:08:08.919 Test: op_login_check_target_test ...[2024-07-12 08:34:44.094973] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:08:08.919 passed 00:08:08.919 Test: op_login_session_normal_test ...[2024-07-12 08:34:44.095611] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:08.919 [2024-07-12 08:34:44.095766] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:08.919 [2024-07-12 08:34:44.095894] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:08.919 [2024-07-12 08:34:44.096028] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:08.919 [2024-07-12 08:34:44.096213] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:08.919 [2024-07-12 08:34:44.096446] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:08.919 [2024-07-12 08:34:44.096603] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:08.919 passed 00:08:08.919 Test: maxburstlength_test ...[2024-07-12 08:34:44.097078] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:08.919 [2024-07-12 08:34:44.097262] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:08.919 passed 00:08:08.919 Test: underflow_for_read_transfer_test ...passed 00:08:08.919 Test: underflow_for_zero_read_transfer_test ...passed 00:08:08.919 Test: underflow_for_request_sense_test ...passed 00:08:08.919 Test: underflow_for_check_condition_test ...passed 00:08:08.919 Test: add_transfer_task_test ...passed 00:08:08.919 Test: get_transfer_task_test ...passed 00:08:08.919 Test: del_transfer_task_test ...passed 00:08:08.919 Test: clear_all_transfer_tasks_test ...passed 00:08:08.919 Test: build_iovs_test ...passed 00:08:08.919 Test: build_iovs_with_md_test ...passed 00:08:08.919 Test: pdu_hdr_op_login_test ...[2024-07-12 08:34:44.100771] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:08.919 [2024-07-12 08:34:44.100990] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:08.919 [2024-07-12 08:34:44.101210] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 
00:08:08.919 passed 00:08:08.919 Test: pdu_hdr_op_text_test ...[2024-07-12 08:34:44.101573] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:08.919 [2024-07-12 08:34:44.101785] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:08.919 [2024-07-12 08:34:44.101926] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:08.919 passed 00:08:08.919 Test: pdu_hdr_op_logout_test ...[2024-07-12 08:34:44.102252] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:08.919 passed 00:08:08.919 Test: pdu_hdr_op_scsi_test ...[2024-07-12 08:34:44.102696] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:08.919 [2024-07-12 08:34:44.102841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:08.919 [2024-07-12 08:34:44.102987] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:08.919 [2024-07-12 08:34:44.103194] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:08.919 [2024-07-12 08:34:44.103396] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:08.919 [2024-07-12 08:34:44.103663] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:08.920 passed 00:08:08.920 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-12 08:34:44.104048] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:08.920 [2024-07-12 08:34:44.104219] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:08.920 passed 00:08:08.920 Test: pdu_hdr_op_nopout_test ...[2024-07-12 08:34:44.104763] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:08.920 [2024-07-12 08:34:44.104960] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:08.920 [2024-07-12 08:34:44.105116] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:08.920 [2024-07-12 08:34:44.105246] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:08.920 passed 00:08:08.920 Test: pdu_hdr_op_data_test ...[2024-07-12 08:34:44.105448] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:08.920 [2024-07-12 08:34:44.105752] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:08.920 [2024-07-12 08:34:44.105913] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the 
dataout pdu data length is larger than the value sent by R2T PDU 00:08:08.920 [2024-07-12 08:34:44.106075] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:08.920 [2024-07-12 08:34:44.106234] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:08.920 [2024-07-12 08:34:44.106425] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:08.920 [2024-07-12 08:34:44.106567] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:08.920 passed 00:08:08.920 Test: empty_text_with_cbit_test ...passed 00:08:09.178 Test: pdu_payload_read_test ...[2024-07-12 08:34:44.109184] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:09.178 passed 00:08:09.178 Test: data_out_pdu_sequence_test ...passed 00:08:09.178 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:09.178 00:08:09.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.178 suites 1 1 n/a 0 0 00:08:09.178 tests 24 24 24 0 0 00:08:09.178 asserts 150253 150253 150253 0 n/a 00:08:09.178 00:08:09.178 Elapsed time = 0.019 seconds 00:08:09.178 08:34:44 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:09.178 00:08:09.178 00:08:09.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.178 http://cunit.sourceforge.net/ 00:08:09.178 00:08:09.178 00:08:09.178 Suite: init_grp_suite 00:08:09.178 Test: create_initiator_group_success_case ...passed 00:08:09.178 Test: find_initiator_group_success_case ...passed 00:08:09.178 Test: register_initiator_group_twice_case ...passed 00:08:09.178 Test: add_initiator_name_success_case ...passed 00:08:09.178 Test: add_initiator_name_fail_case ...[2024-07-12 08:34:44.151464] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:09.178 passed 00:08:09.178 Test: delete_all_initiator_names_success_case ...passed 00:08:09.178 Test: add_netmask_success_case ...passed 00:08:09.178 Test: add_netmask_fail_case ...[2024-07-12 08:34:44.152495] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:09.178 passed 00:08:09.178 Test: delete_all_netmasks_success_case ...passed 00:08:09.178 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:09.178 Test: netmask_overwrite_all_to_any_case ...passed 00:08:09.178 Test: add_delete_initiator_names_case ...passed 00:08:09.178 Test: add_duplicated_initiator_names_case ...passed 00:08:09.178 Test: delete_nonexisting_initiator_names_case ...passed 00:08:09.178 Test: add_delete_netmasks_case ...passed 00:08:09.178 Test: add_duplicated_netmasks_case ...passed 00:08:09.178 Test: delete_nonexisting_netmasks_case ...passed 00:08:09.178 00:08:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.179 suites 1 1 n/a 0 0 00:08:09.179 tests 17 17 17 0 0 00:08:09.179 asserts 108 108 108 0 n/a 00:08:09.179 00:08:09.179 Elapsed time = 0.001 seconds 00:08:09.179 08:34:44 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:09.179 00:08:09.179 00:08:09.179 CUnit - A unit testing 
framework for C - Version 2.1-3 00:08:09.179 http://cunit.sourceforge.net/ 00:08:09.179 00:08:09.179 00:08:09.179 Suite: portal_grp_suite 00:08:09.179 Test: portal_create_ipv4_normal_case ...passed 00:08:09.179 Test: portal_create_ipv6_normal_case ...passed 00:08:09.179 Test: portal_create_ipv4_wildcard_case ...passed 00:08:09.179 Test: portal_create_ipv6_wildcard_case ...passed 00:08:09.179 Test: portal_create_twice_case ...[2024-07-12 08:34:44.189688] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:09.179 passed 00:08:09.179 Test: portal_grp_register_unregister_case ...passed 00:08:09.179 Test: portal_grp_register_twice_case ...passed 00:08:09.179 Test: portal_grp_add_delete_case ...passed 00:08:09.179 Test: portal_grp_add_delete_twice_case ...passed 00:08:09.179 00:08:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.179 suites 1 1 n/a 0 0 00:08:09.179 tests 9 9 9 0 0 00:08:09.179 asserts 44 44 44 0 n/a 00:08:09.179 00:08:09.179 Elapsed time = 0.004 seconds 00:08:09.179 ************************************ 00:08:09.179 END TEST unittest_iscsi 00:08:09.179 ************************************ 00:08:09.179 00:08:09.179 real 0m0.239s 00:08:09.179 user 0m0.119s 00:08:09.179 sys 0m0.105s 00:08:09.179 08:34:44 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.179 08:34:44 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:09.179 08:34:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:09.179 08:34:44 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:09.179 08:34:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.179 08:34:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.179 08:34:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:09.179 ************************************ 00:08:09.179 START TEST unittest_json 00:08:09.179 ************************************ 00:08:09.179 08:34:44 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:08:09.179 08:34:44 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:09.179 00:08:09.179 00:08:09.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.179 http://cunit.sourceforge.net/ 00:08:09.179 00:08:09.179 00:08:09.179 Suite: json 00:08:09.179 Test: test_parse_literal ...passed 00:08:09.179 Test: test_parse_string_simple ...passed 00:08:09.179 Test: test_parse_string_control_chars ...passed 00:08:09.179 Test: test_parse_string_utf8 ...passed 00:08:09.179 Test: test_parse_string_escapes_twochar ...passed 00:08:09.179 Test: test_parse_string_escapes_unicode ...passed 00:08:09.179 Test: test_parse_number ...passed 00:08:09.179 Test: test_parse_array ...passed 00:08:09.179 Test: test_parse_object ...passed 00:08:09.179 Test: test_parse_nesting ...passed 00:08:09.179 Test: test_parse_comment ...passed 00:08:09.179 00:08:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.179 suites 1 1 n/a 0 0 00:08:09.179 tests 11 11 11 0 0 00:08:09.179 asserts 1516 1516 1516 0 n/a 00:08:09.179 00:08:09.179 Elapsed time = 0.002 seconds 00:08:09.179 08:34:44 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:09.179 00:08:09.179 00:08:09.179 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:09.179 http://cunit.sourceforge.net/ 00:08:09.179 00:08:09.179 00:08:09.179 Suite: json 00:08:09.179 Test: test_strequal ...passed 00:08:09.179 Test: test_num_to_uint16 ...passed 00:08:09.179 Test: test_num_to_int32 ...passed 00:08:09.179 Test: test_num_to_uint64 ...passed 00:08:09.179 Test: test_decode_object ...passed 00:08:09.179 Test: test_decode_array ...passed 00:08:09.179 Test: test_decode_bool ...passed 00:08:09.179 Test: test_decode_uint16 ...passed 00:08:09.179 Test: test_decode_int32 ...passed 00:08:09.179 Test: test_decode_uint32 ...passed 00:08:09.179 Test: test_decode_uint64 ...passed 00:08:09.179 Test: test_decode_string ...passed 00:08:09.179 Test: test_decode_uuid ...passed 00:08:09.179 Test: test_find ...passed 00:08:09.179 Test: test_find_array ...passed 00:08:09.179 Test: test_iterating ...passed 00:08:09.179 Test: test_free_object ...passed 00:08:09.179 00:08:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.179 suites 1 1 n/a 0 0 00:08:09.179 tests 17 17 17 0 0 00:08:09.179 asserts 236 236 236 0 n/a 00:08:09.179 00:08:09.179 Elapsed time = 0.001 seconds 00:08:09.179 08:34:44 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:09.179 00:08:09.179 00:08:09.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.179 http://cunit.sourceforge.net/ 00:08:09.179 00:08:09.179 00:08:09.179 Suite: json 00:08:09.179 Test: test_write_literal ...passed 00:08:09.179 Test: test_write_string_simple ...passed 00:08:09.179 Test: test_write_string_escapes ...passed 00:08:09.179 Test: test_write_string_utf16le ...passed 00:08:09.179 Test: test_write_number_int32 ...passed 00:08:09.179 Test: test_write_number_uint32 ...passed 00:08:09.179 Test: test_write_number_uint128 ...passed 00:08:09.179 Test: test_write_string_number_uint128 ...passed 00:08:09.179 Test: test_write_number_int64 ...passed 00:08:09.179 Test: test_write_number_uint64 ...passed 00:08:09.179 Test: test_write_number_double ...passed 00:08:09.179 Test: test_write_uuid ...passed 00:08:09.179 Test: test_write_array ...passed 00:08:09.179 Test: test_write_object ...passed 00:08:09.179 Test: test_write_nesting ...passed 00:08:09.179 Test: test_write_val ...passed 00:08:09.179 00:08:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.179 suites 1 1 n/a 0 0 00:08:09.179 tests 16 16 16 0 0 00:08:09.179 asserts 918 918 918 0 n/a 00:08:09.179 00:08:09.179 Elapsed time = 0.005 seconds 00:08:09.179 08:34:44 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:09.437 00:08:09.437 00:08:09.437 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.437 http://cunit.sourceforge.net/ 00:08:09.437 00:08:09.437 00:08:09.437 Suite: jsonrpc 00:08:09.437 Test: test_parse_request ...passed 00:08:09.437 Test: test_parse_request_streaming ...passed 00:08:09.437 00:08:09.437 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.437 suites 1 1 n/a 0 0 00:08:09.437 tests 2 2 2 0 0 00:08:09.437 asserts 289 289 289 0 n/a 00:08:09.437 00:08:09.437 Elapsed time = 0.004 seconds 00:08:09.437 00:08:09.437 real 0m0.137s 00:08:09.437 user 0m0.074s 00:08:09.437 sys 0m0.056s 00:08:09.437 08:34:44 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.438 08:34:44 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 END TEST 
unittest_json 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:09.438 08:34:44 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 START TEST unittest_rpc 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:08:09.438 08:34:44 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:09.438 00:08:09.438 00:08:09.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.438 http://cunit.sourceforge.net/ 00:08:09.438 00:08:09.438 00:08:09.438 Suite: rpc 00:08:09.438 Test: test_jsonrpc_handler ...passed 00:08:09.438 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:09.438 Test: test_rpc_get_methods ...[2024-07-12 08:34:44.462665] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:09.438 passed 00:08:09.438 Test: test_rpc_spdk_get_version ...passed 00:08:09.438 Test: test_spdk_rpc_listen_close ...passed 00:08:09.438 Test: test_rpc_run_multiple_servers ...passed 00:08:09.438 00:08:09.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.438 suites 1 1 n/a 0 0 00:08:09.438 tests 6 6 6 0 0 00:08:09.438 asserts 23 23 23 0 n/a 00:08:09.438 00:08:09.438 Elapsed time = 0.001 seconds 00:08:09.438 00:08:09.438 real 0m0.034s 00:08:09.438 user 0m0.020s 00:08:09.438 sys 0m0.013s 00:08:09.438 08:34:44 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.438 08:34:44 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 END TEST unittest_rpc 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:09.438 08:34:44 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 START TEST unittest_notify 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:09.438 00:08:09.438 00:08:09.438 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.438 http://cunit.sourceforge.net/ 00:08:09.438 00:08:09.438 00:08:09.438 Suite: app_suite 00:08:09.438 Test: notify ...passed 00:08:09.438 00:08:09.438 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.438 suites 1 1 n/a 0 0 00:08:09.438 tests 1 1 1 0 0 00:08:09.438 asserts 13 13 13 0 n/a 00:08:09.438 00:08:09.438 Elapsed time = 0.000 seconds 00:08:09.438 00:08:09.438 real 0m0.031s 00:08:09.438 user 0m0.027s 00:08:09.438 sys 0m0.004s 00:08:09.438 08:34:44 unittest.unittest_notify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.438 08:34:44 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 END TEST unittest_notify 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:09.438 08:34:44 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.438 08:34:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:09.438 ************************************ 00:08:09.438 START TEST unittest_nvme 00:08:09.438 ************************************ 00:08:09.438 08:34:44 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:08:09.438 08:34:44 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:09.697 00:08:09.697 00:08:09.697 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.697 http://cunit.sourceforge.net/ 00:08:09.697 00:08:09.697 00:08:09.697 Suite: nvme 00:08:09.697 Test: test_opc_data_transfer ...passed 00:08:09.697 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:09.697 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:09.697 Test: test_trid_parse_and_compare ...[2024-07-12 08:34:44.632537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:09.697 [2024-07-12 08:34:44.632961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:09.697 [2024-07-12 08:34:44.633158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:09.697 [2024-07-12 08:34:44.633305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:09.697 [2024-07-12 08:34:44.633443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:08:09.697 [2024-07-12 08:34:44.633635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:09.697 passed 00:08:09.697 Test: test_trid_trtype_str ...passed 00:08:09.697 Test: test_trid_adrfam_str ...passed 00:08:09.697 Test: test_nvme_ctrlr_probe ...[2024-07-12 08:34:44.634411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:09.697 passed 00:08:09.697 Test: test_spdk_nvme_probe ...[2024-07-12 08:34:44.634778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:09.697 [2024-07-12 08:34:44.634942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:09.697 [2024-07-12 08:34:44.635151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:09.697 [2024-07-12 08:34:44.635295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:09.697 passed 00:08:09.697 Test: test_spdk_nvme_connect ...[2024-07-12 08:34:44.635464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: 
*ERROR*: No transport ID specified 00:08:09.697 [2024-07-12 08:34:44.635977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:09.697 passed 00:08:09.697 Test: test_nvme_ctrlr_probe_internal ...[2024-07-12 08:34:44.636467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:09.697 [2024-07-12 08:34:44.636607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:09.697 passed 00:08:09.697 Test: test_nvme_init_controllers ...[2024-07-12 08:34:44.636925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:09.697 passed 00:08:09.697 Test: test_nvme_driver_init ...[2024-07-12 08:34:44.637293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:09.697 [2024-07-12 08:34:44.637428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:09.698 [2024-07-12 08:34:44.752053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:09.698 [2024-07-12 08:34:44.752513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:09.698 passed 00:08:09.698 Test: test_spdk_nvme_detach ...passed 00:08:09.698 Test: test_nvme_completion_poll_cb ...passed 00:08:09.698 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:09.698 Test: test_nvme_allocate_request_null ...passed 00:08:09.698 Test: test_nvme_allocate_request ...passed 00:08:09.698 Test: test_nvme_free_request ...passed 00:08:09.698 Test: test_nvme_allocate_request_user_copy ...passed 00:08:09.698 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:09.698 Test: test_nvme_request_check_timeout ...passed 00:08:09.698 Test: test_nvme_wait_for_completion ...passed 00:08:09.698 Test: test_spdk_nvme_parse_func ...passed 00:08:09.698 Test: test_spdk_nvme_detach_async ...passed 00:08:09.698 Test: test_nvme_parse_addr ...[2024-07-12 08:34:44.756603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:09.698 passed 00:08:09.698 00:08:09.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.698 suites 1 1 n/a 0 0 00:08:09.698 tests 25 25 25 0 0 00:08:09.698 asserts 326 326 326 0 n/a 00:08:09.698 00:08:09.698 Elapsed time = 0.007 seconds 00:08:09.698 08:34:44 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:09.698 00:08:09.698 00:08:09.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.698 http://cunit.sourceforge.net/ 00:08:09.698 00:08:09.698 00:08:09.698 Suite: nvme_ctrlr 00:08:09.698 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-12 08:34:44.796331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-12 08:34:44.798357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-12 
08:34:44.799907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-12 08:34:44.801419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-12 08:34:44.802935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 [2024-07-12 08:34:44.804237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-12 08:34:44.805669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-12 08:34:44.806998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-12 08:34:44.809739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 [2024-07-12 08:34:44.812115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-12 08:34:44.813477] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:09.698 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-12 08:34:44.816370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 [2024-07-12 08:34:44.817634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-07-12 08:34:44.820170] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:09.698 Test: test_nvme_ctrlr_init_delay ...[2024-07-12 08:34:44.823079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_alloc_io_qpair_rr_1 ...[2024-07-12 08:34:44.824696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 [2024-07-12 08:34:44.824936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:09.698 [2024-07-12 08:34:44.825241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:09.698 [2024-07-12 08:34:44.825443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:09.698 [2024-07-12 08:34:44.825622]
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:09.698 passed 00:08:09.698 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:09.698 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:09.698 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-12 08:34:44.826424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-12 08:34:44.826938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 [2024-07-12 08:34:44.827177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:09.698 passed 00:08:09.698 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-12 08:34:44.827787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:09.698 [2024-07-12 08:34:44.828068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:09.698 [2024-07-12 08:34:44.828339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:09.698 [2024-07-12 08:34:44.828562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_fail ...[2024-07-12 08:34:44.828961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:08:09.698 passed 00:08:09.698 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:09.698 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:09.698 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-12 08:34:44.829728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.698 passed 00:08:09.698 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:09.698 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-12 08:34:44.831738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.956 passed 00:08:09.956 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:09.956 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:09.956 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:09.956 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-12 08:34:45.089284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.956 passed 00:08:09.956 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-12 08:34:45.096794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.956 passed 00:08:09.956 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-12 08:34:45.098340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.956 [2024-07-12 08:34:45.098512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:09.956 passed 00:08:09.957 Test: test_alloc_io_qpair_fail ...[2024-07-12 08:34:45.100000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.957 [2024-07-12 08:34:45.100205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:09.957 passed 00:08:09.957 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:09.957 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:09.957 Test: test_nvme_ctrlr_set_state ...[2024-07-12 08:34:45.101087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:08:09.957 passed 00:08:09.957 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-12 08:34:45.101469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:09.957 passed 00:08:09.957 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-12 08:34:45.122740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-12 08:34:45.159300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_reset ...[2024-07-12 08:34:45.161085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_aer_callback ...[2024-07-12 08:34:45.161727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-12 08:34:45.163429] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:10.216 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:10.216 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-12 08:34:45.165677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:10.216 Test: test_nvme_ctrlr_ana_resize ...[2024-07-12 08:34:45.167437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:10.216 Test: test_nvme_transport_ctrlr_ready ...[2024-07-12 08:34:45.169360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:10.216 [2024-07-12 08:34:45.169490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ctrlr_disable ...[2024-07-12 08:34:45.169637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:10.216 passed 00:08:10.216 00:08:10.216 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.216 suites 1 1 n/a 0 0 00:08:10.216 tests 44 44 44 0 0 00:08:10.216 asserts 10434 10434 10434 0 n/a 00:08:10.216 00:08:10.216 Elapsed time = 0.321 seconds 00:08:10.216 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
00:08:10.216 00:08:10.216 00:08:10.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.216 http://cunit.sourceforge.net/ 00:08:10.216 00:08:10.216 00:08:10.216 Suite: nvme_ctrlr_cmd 00:08:10.216 Test: test_get_log_pages ...passed 00:08:10.216 Test: test_set_feature_cmd ...passed 00:08:10.216 Test: test_set_feature_ns_cmd ...passed 00:08:10.216 Test: test_get_feature_cmd ...passed 00:08:10.216 Test: test_get_feature_ns_cmd ...passed 00:08:10.216 Test: test_abort_cmd ...passed 00:08:10.216 Test: test_set_host_id_cmds ...[2024-07-12 08:34:45.213637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:10.216 passed 00:08:10.216 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:10.216 Test: test_io_raw_cmd ...passed 00:08:10.216 Test: test_io_raw_cmd_with_md ...passed 00:08:10.216 Test: test_namespace_attach ...passed 00:08:10.216 Test: test_namespace_detach ...passed 00:08:10.216 Test: test_namespace_create ...passed 00:08:10.216 Test: test_namespace_delete ...passed 00:08:10.216 Test: test_doorbell_buffer_config ...passed 00:08:10.216 Test: test_format_nvme ...passed 00:08:10.216 Test: test_fw_commit ...passed 00:08:10.216 Test: test_fw_image_download ...passed 00:08:10.216 Test: test_sanitize ...passed 00:08:10.216 Test: test_directive ...passed 00:08:10.216 Test: test_nvme_request_add_abort ...passed 00:08:10.216 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:10.216 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:10.216 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:10.216 00:08:10.216 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.216 suites 1 1 n/a 0 0 00:08:10.216 tests 24 24 24 0 0 00:08:10.216 asserts 198 198 198 0 n/a 00:08:10.216 00:08:10.216 Elapsed time = 0.001 seconds 00:08:10.216 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:10.216 00:08:10.216 00:08:10.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.216 http://cunit.sourceforge.net/ 00:08:10.216 00:08:10.216 00:08:10.216 Suite: nvme_ctrlr_cmd 00:08:10.216 Test: test_geometry_cmd ...passed 00:08:10.216 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:10.216 00:08:10.216 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.216 suites 1 1 n/a 0 0 00:08:10.216 tests 2 2 2 0 0 00:08:10.216 asserts 7 7 7 0 n/a 00:08:10.216 00:08:10.216 Elapsed time = 0.000 seconds 00:08:10.216 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:10.216 00:08:10.216 00:08:10.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.216 http://cunit.sourceforge.net/ 00:08:10.216 00:08:10.216 00:08:10.216 Suite: nvme 00:08:10.216 Test: test_nvme_ns_construct ...passed 00:08:10.216 Test: test_nvme_ns_uuid ...passed 00:08:10.216 Test: test_nvme_ns_csi ...passed 00:08:10.216 Test: test_nvme_ns_data ...passed 00:08:10.216 Test: test_nvme_ns_set_identify_data ...passed 00:08:10.216 Test: test_spdk_nvme_ns_get_values ...passed 00:08:10.216 Test: test_spdk_nvme_ns_is_active ...passed 00:08:10.216 Test: spdk_nvme_ns_supports ...passed 00:08:10.216 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:10.216 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:10.216 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:10.216 Test: 
test_nvme_ns_find_id_desc ...passed 00:08:10.216 00:08:10.216 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.216 suites 1 1 n/a 0 0 00:08:10.216 tests 12 12 12 0 0 00:08:10.216 asserts 95 95 95 0 n/a 00:08:10.216 00:08:10.216 Elapsed time = 0.001 seconds 00:08:10.216 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:10.216 00:08:10.216 00:08:10.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.216 http://cunit.sourceforge.net/ 00:08:10.216 00:08:10.216 00:08:10.216 Suite: nvme_ns_cmd 00:08:10.216 Test: split_test ...passed 00:08:10.216 Test: split_test2 ...passed 00:08:10.216 Test: split_test3 ...passed 00:08:10.216 Test: split_test4 ...passed 00:08:10.216 Test: test_nvme_ns_cmd_flush ...passed 00:08:10.216 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:10.216 Test: test_nvme_ns_cmd_copy ...passed 00:08:10.216 Test: test_io_flags ...[2024-07-12 08:34:45.309132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:10.216 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:10.216 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:10.216 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:10.216 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:10.216 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:10.216 Test: test_cmd_child_request ...passed 00:08:10.216 Test: test_nvme_ns_cmd_readv ...passed 00:08:10.216 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:10.216 Test: test_nvme_ns_cmd_writev ...[2024-07-12 08:34:45.311293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:10.216 passed 00:08:10.216 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:10.216 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:10.217 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:10.217 Test: test_nvme_ns_cmd_comparev ...passed 00:08:10.217 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:10.217 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:10.217 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:10.217 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:10.217 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:10.217 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-12 08:34:45.314128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:10.217 passed 00:08:10.217 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-12 08:34:45.314466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:10.217 passed 00:08:10.217 Test: test_nvme_ns_cmd_verify ...passed 00:08:10.217 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:10.217 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:10.217 00:08:10.217 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.217 suites 1 1 n/a 0 0 00:08:10.217 tests 32 32 32 0 0 00:08:10.217 asserts 550 550 550 0 n/a 00:08:10.217 00:08:10.217 Elapsed time = 0.004 seconds 00:08:10.217 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:10.217 00:08:10.217 00:08:10.217 CUnit - A unit 
testing framework for C - Version 2.1-3 00:08:10.217 http://cunit.sourceforge.net/ 00:08:10.217 00:08:10.217 00:08:10.217 Suite: nvme_ns_cmd 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:10.217 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:10.217 00:08:10.217 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.217 suites 1 1 n/a 0 0 00:08:10.217 tests 12 12 12 0 0 00:08:10.217 asserts 123 123 123 0 n/a 00:08:10.217 00:08:10.217 Elapsed time = 0.001 seconds 00:08:10.217 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:10.217 00:08:10.217 00:08:10.217 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.217 http://cunit.sourceforge.net/ 00:08:10.217 00:08:10.217 00:08:10.217 Suite: nvme_qpair 00:08:10.217 Test: test3 ...passed 00:08:10.217 Test: test_ctrlr_failed ...passed 00:08:10.217 Test: struct_packing ...passed 00:08:10.217 Test: test_nvme_qpair_process_completions ...[2024-07-12 08:34:45.380096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:10.217 [2024-07-12 08:34:45.380452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:10.217 [2024-07-12 08:34:45.380607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:10.217 [2024-07-12 08:34:45.380766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:10.217 passed 00:08:10.217 Test: test_nvme_completion_is_retry ...passed 00:08:10.217 Test: test_get_status_string ...passed 00:08:10.217 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:10.217 Test: test_nvme_qpair_submit_request ...passed 00:08:10.217 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:10.217 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:10.217 Test: test_nvme_qpair_init_deinit ...[2024-07-12 08:34:45.381916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:10.217 passed 00:08:10.217 Test: test_nvme_get_sgl_print_info ...passed 00:08:10.217 00:08:10.217 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.217 suites 1 1 n/a 0 0 00:08:10.217 tests 12 12 12 0 0 00:08:10.217 asserts 154 154 154 0 n/a 00:08:10.217 00:08:10.217 Elapsed time = 0.001 seconds 00:08:10.217 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@96 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:10.477 00:08:10.477 00:08:10.477 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.477 http://cunit.sourceforge.net/ 00:08:10.477 00:08:10.477 00:08:10.477 Suite: nvme_pcie 00:08:10.477 Test: test_prp_list_append ...[2024-07-12 08:34:45.412194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:10.477 [2024-07-12 08:34:45.412659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:10.477 [2024-07-12 08:34:45.412861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:10.477 [2024-07-12 08:34:45.413255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:10.477 [2024-07-12 08:34:45.413491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:10.477 passed 00:08:10.477 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:10.477 Test: test_shadow_doorbell_update ...passed 00:08:10.477 Test: test_build_contig_hw_sgl_request ...passed 00:08:10.477 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:10.477 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:10.477 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:10.477 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-12 08:34:45.414940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:10.477 passed 00:08:10.477 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:10.477 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:10.477 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-12 08:34:45.415692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:10.477 passed 00:08:10.477 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-12 08:34:45.416128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:10.477 passed 00:08:10.478 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-12 08:34:45.416348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:10.478 passed 00:08:10.478 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-12 08:34:45.416761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:10.478 passed 00:08:10.478 00:08:10.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.478 suites 1 1 n/a 0 0 00:08:10.478 tests 14 14 14 0 0 00:08:10.478 asserts 235 235 235 0 n/a 00:08:10.478 00:08:10.478 Elapsed time = 0.002 seconds 00:08:10.478 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:10.478 00:08:10.478 00:08:10.478 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.478 http://cunit.sourceforge.net/ 00:08:10.478 00:08:10.478 00:08:10.478 Suite: nvme_ns_cmd 00:08:10.478 Test: nvme_poll_group_create_test ...passed 00:08:10.478 Test: nvme_poll_group_add_remove_test ...passed 00:08:10.478 Test: nvme_poll_group_process_completions ...passed 00:08:10.478 Test: nvme_poll_group_destroy_test ...passed 00:08:10.478 Test: nvme_poll_group_get_free_stats ...passed 00:08:10.478 00:08:10.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.478 suites 1 1 n/a 0 0 00:08:10.478 tests 5 5 5 0 0 00:08:10.478 asserts 75 75 75 0 n/a 00:08:10.478 00:08:10.478 Elapsed time = 0.000 seconds 00:08:10.478 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:10.478 00:08:10.478 00:08:10.478 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.478 http://cunit.sourceforge.net/ 00:08:10.478 00:08:10.478 00:08:10.478 Suite: nvme_quirks 00:08:10.478 Test: test_nvme_quirks_striping ...passed 00:08:10.478 00:08:10.478 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.478 suites 1 1 n/a 0 0 00:08:10.478 tests 1 1 1 0 0 00:08:10.478 asserts 5 5 5 0 n/a 00:08:10.478 00:08:10.478 Elapsed time = 0.000 seconds 00:08:10.478 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:10.478 00:08:10.478 00:08:10.478 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.478 http://cunit.sourceforge.net/ 00:08:10.478 00:08:10.478 00:08:10.478 Suite: nvme_tcp 00:08:10.478 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:10.478 Test: test_nvme_tcp_build_iovs ...passed 00:08:10.478 Test: test_nvme_tcp_build_sgl_request ...[2024-07-12 08:34:45.514315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff10d2baa0, and the iovcnt=16, remaining_size=28672 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:10.478 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:10.478 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:10.478 Test: test_nvme_tcp_req_get ...passed 00:08:10.478 Test: test_nvme_tcp_req_init ...passed 00:08:10.478 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:10.478 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:10.478 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-12 08:34:45.515864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2d7e0 is same with the state(6) to be set 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_alloc_reqs ...passed 00:08:10.478 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-12 08:34:45.516431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2c990 is same with the state(5) to be set 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-12 08:34:45.516723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff10d2d520 00:08:10.478 [2024-07-12 08:34:45.516849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:10.478 [2024-07-12 08:34:45.517031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:10.478 [2024-07-12 08:34:45.517268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:10.478 [2024-07-12 08:34:45.517456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.517952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.518101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2ce50 is same with the state(5) to be set 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-12 08:34:45.518404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:10.478 [2024-07-12 08:34:45.518530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:10.478 [2024-07-12 08:34:45.518799] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:10.478 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-12 08:34:45.519317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff10d2d060): PDU Sequence Error 00:08:10.478 passed 00:08:10.478 Test: test_nvme_tcp_icresp_handle ...[2024-07-12 08:34:45.519605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:10.478 [2024-07-12 08:34:45.519730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:10.478 [2024-07-12 08:34:45.519853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2c9a0 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.519972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:10.478 [2024-07-12 08:34:45.520030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2c9a0 is same with the state(5) to be set 00:08:10.478 [2024-07-12 08:34:45.520213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2c9a0 is same with the state(0) to be set 00:08:10.478 passed 00:08:10.479 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-12 08:34:45.520574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff10d2d520): PDU Sequence Error 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-12 08:34:45.520899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff10d2bc60 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:10.479 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-12 08:34:45.521460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff10d2b2e0, errno=0, rc=0 00:08:10.479 [2024-07-12 08:34:45.521598] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2b2e0 is same with the state(5) to be set 00:08:10.479 [2024-07-12 08:34:45.521751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff10d2b2e0 is same with the state(5) to be set 00:08:10.479 [2024-07-12 08:34:45.521876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff10d2b2e0 (0): Success 00:08:10.479 [2024-07-12 08:34:45.521997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff10d2b2e0 (0): Success 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-12 08:34:45.623356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:08:10.479 [2024-07-12 08:34:45.623684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:10.479 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-12 08:34:45.624054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.479 [2024-07-12 08:34:45.624214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-12 08:34:45.624640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:10.479 [2024-07-12 08:34:45.624773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:10.479 [2024-07-12 08:34:45.624967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:10.479 [2024-07-12 08:34:45.625111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:10.479 [2024-07-12 08:34:45.625283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:10.479 [2024-07-12 08:34:45.625435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:10.479 passed 00:08:10.479 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-12 08:34:45.625794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:08:10.479 [2024-07-12 08:34:45.625914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:10.479 passed 00:08:10.479 00:08:10.479 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.479 suites 1 1 n/a 0 0 00:08:10.479 tests 27 27 27 0 0 00:08:10.479 asserts 624 624 624 0 n/a 00:08:10.479 00:08:10.479 Elapsed time = 0.106 seconds 00:08:10.479 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:10.738 00:08:10.738 00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: nvme_transport 00:08:10.738 Test: test_nvme_get_transport ...passed 00:08:10.738 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:10.738 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:10.738 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:10.738 Test: test_ctrlr_get_memory_domains ...passed 00:08:10.738 00:08:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.738 suites 1 1 n/a 0 0 00:08:10.738 tests 5 5 5 0 0 00:08:10.738 asserts 28 28 28 0 n/a 00:08:10.738 00:08:10.738 Elapsed time = 0.000 seconds 00:08:10.738 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:10.738 00:08:10.738 
00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: nvme_io_msg 00:08:10.738 Test: test_nvme_io_msg_send ...passed 00:08:10.738 Test: test_nvme_io_msg_process ...passed 00:08:10.738 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:10.738 00:08:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.738 suites 1 1 n/a 0 0 00:08:10.738 tests 3 3 3 0 0 00:08:10.738 asserts 56 56 56 0 n/a 00:08:10.738 00:08:10.738 Elapsed time = 0.000 seconds 00:08:10.738 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:10.738 00:08:10.738 00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: nvme_pcie_common 00:08:10.738 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-12 08:34:45.733741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:10.738 passed 00:08:10.738 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:10.738 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:10.738 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-12 08:34:45.734977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:10.738 [2024-07-12 08:34:45.735197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:10.738 [2024-07-12 08:34:45.735359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:10.738 passed 00:08:10.738 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:10.738 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-12 08:34:45.736176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.738 [2024-07-12 08:34:45.736375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.738 passed 00:08:10.738 00:08:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.738 suites 1 1 n/a 0 0 00:08:10.738 tests 6 6 6 0 0 00:08:10.738 asserts 148 148 148 0 n/a 00:08:10.738 00:08:10.738 Elapsed time = 0.002 seconds 00:08:10.738 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:10.738 00:08:10.738 00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: nvme_fabric 00:08:10.738 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:10.738 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:10.738 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:10.738 Test: test_nvme_fabric_discover_probe ...passed 00:08:10.738 Test: test_nvme_fabric_qpair_connect ...[2024-07-12 08:34:45.769456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:10.738 passed 
00:08:10.738 00:08:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.738 suites 1 1 n/a 0 0 00:08:10.738 tests 5 5 5 0 0 00:08:10.738 asserts 60 60 60 0 n/a 00:08:10.738 00:08:10.738 Elapsed time = 0.001 seconds 00:08:10.738 08:34:45 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:10.738 00:08:10.738 00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: nvme_opal 00:08:10.738 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:10.738 Test: test_opal_add_short_atom_header ...[2024-07-12 08:34:45.800905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:10.738 passed 00:08:10.738 00:08:10.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.738 suites 1 1 n/a 0 0 00:08:10.738 tests 2 2 2 0 0 00:08:10.738 asserts 22 22 22 0 n/a 00:08:10.738 00:08:10.738 Elapsed time = 0.000 seconds 00:08:10.738 00:08:10.738 real 0m1.205s 00:08:10.738 user 0m0.594s 00:08:10.738 sys 0m0.412s 00:08:10.738 08:34:45 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.738 08:34:45 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.738 ************************************ 00:08:10.738 END TEST unittest_nvme 00:08:10.738 ************************************ 00:08:10.738 08:34:45 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:10.738 08:34:45 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:10.738 08:34:45 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.738 08:34:45 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.738 08:34:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:10.738 ************************************ 00:08:10.738 START TEST unittest_log 00:08:10.738 ************************************ 00:08:10.738 08:34:45 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:10.738 00:08:10.738 00:08:10.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.738 http://cunit.sourceforge.net/ 00:08:10.738 00:08:10.738 00:08:10.738 Suite: log 00:08:10.738 Test: log_test ...[2024-07-12 08:34:45.884911] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:10.738 [2024-07-12 08:34:45.885325] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:10.738 log dump test: 00:08:10.738 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:10.738 spdk dump test: 00:08:10.738 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:10.738 spdk dump test: 00:08:10.738 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:10.739 00000010 65 20 63 68 61 72 73 e chars 00:08:10.739 passed 00:08:12.114 Test: deprecation ...passed 00:08:12.114 00:08:12.114 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.114 suites 1 1 n/a 0 0 00:08:12.114 tests 2 2 2 0 0 00:08:12.114 asserts 73 73 73 0 n/a 00:08:12.114 00:08:12.114 Elapsed time = 0.001 seconds 00:08:12.114 ************************************ 00:08:12.114 END TEST unittest_log 00:08:12.114 ************************************ 00:08:12.114 00:08:12.114 real 0m1.034s 00:08:12.114 user 0m0.028s 00:08:12.114 sys 0m0.005s 00:08:12.114 08:34:46 unittest.unittest_log -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.114 08:34:46 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:08:12.114 08:34:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:12.114 08:34:46 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:12.114 08:34:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.114 08:34:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.114 08:34:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:12.114 ************************************ 00:08:12.114 START TEST unittest_lvol 00:08:12.114 ************************************ 00:08:12.114 08:34:46 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:12.114 00:08:12.114 00:08:12.114 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.114 http://cunit.sourceforge.net/ 00:08:12.114 00:08:12.114 00:08:12.114 Suite: lvol 00:08:12.114 Test: lvs_init_unload_success ...[2024-07-12 08:34:46.978687] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:12.114 passed 00:08:12.114 Test: lvs_init_destroy_success ...[2024-07-12 08:34:46.979648] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:12.114 passed 00:08:12.114 Test: lvs_init_opts_success ...passed 00:08:12.114 Test: lvs_unload_lvs_is_null_fail ...[2024-07-12 08:34:46.980461] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:12.114 passed 00:08:12.114 Test: lvs_names ...[2024-07-12 08:34:46.980885] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:12.114 [2024-07-12 08:34:46.981074] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:12.114 [2024-07-12 08:34:46.981369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:12.114 passed 00:08:12.114 Test: lvol_create_destroy_success ...passed 00:08:12.114 Test: lvol_create_fail ...[2024-07-12 08:34:46.982482] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:12.114 [2024-07-12 08:34:46.982741] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:12.114 passed 00:08:12.114 Test: lvol_destroy_fail ...[2024-07-12 08:34:46.983417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:12.114 passed 00:08:12.114 Test: lvol_close ...[2024-07-12 08:34:46.983949] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:12.114 [2024-07-12 08:34:46.984110] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:12.114 passed 00:08:12.114 Test: lvol_resize ...passed 00:08:12.114 Test: lvol_set_read_only ...passed 00:08:12.114 Test: test_lvs_load ...[2024-07-12 08:34:46.985601] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:12.114 [2024-07-12 08:34:46.985756] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:12.114 passed 00:08:12.114 Test: lvols_load ...[2024-07-12 08:34:46.986245] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:12.114 [2024-07-12 08:34:46.986504] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:12.114 passed 00:08:12.114 Test: lvol_open ...passed 00:08:12.114 Test: lvol_snapshot ...passed 00:08:12.114 Test: lvol_snapshot_fail ...[2024-07-12 08:34:46.987890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:12.114 passed 00:08:12.115 Test: lvol_clone ...passed 00:08:12.115 Test: lvol_clone_fail ...[2024-07-12 08:34:46.989040] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:12.115 passed 00:08:12.115 Test: lvol_iter_clones ...passed 00:08:12.115 Test: lvol_refcnt ...[2024-07-12 08:34:46.990198] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 5310ae4f-8ccf-4be8-b38b-e92c20cb8e32 because it is still open 00:08:12.115 passed 00:08:12.115 Test: lvol_names ...[2024-07-12 08:34:46.990686] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:12.115 [2024-07-12 08:34:46.990914] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:12.115 [2024-07-12 08:34:46.991272] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:12.115 passed 00:08:12.115 Test: lvol_create_thin_provisioned ...passed 00:08:12.115 Test: lvol_rename ...[2024-07-12 08:34:46.992240] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:12.115 [2024-07-12 08:34:46.992461] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:12.115 passed 00:08:12.115 Test: lvs_rename ...[2024-07-12 08:34:46.992998] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:12.115 passed 00:08:12.115 Test: lvol_inflate ...[2024-07-12 08:34:46.993490] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:12.115 passed 00:08:12.115 Test: lvol_decouple_parent ...[2024-07-12 08:34:46.993988] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:12.115 passed 00:08:12.115 Test: lvol_get_xattr ...passed 00:08:12.115 Test: lvol_esnap_reload ...passed 00:08:12.115 Test: lvol_esnap_create_bad_args ...[2024-07-12 08:34:46.995033] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:12.115 [2024-07-12 08:34:46.995203] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:12.115 [2024-07-12 08:34:46.995329] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:12.115 [2024-07-12 08:34:46.995572] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:12.115 [2024-07-12 08:34:46.995828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:12.115 passed 00:08:12.115 Test: lvol_esnap_create_delete ...passed 00:08:12.115 Test: lvol_esnap_load_esnaps ...[2024-07-12 08:34:46.996621] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:12.115 passed 00:08:12.115 Test: lvol_esnap_missing ...[2024-07-12 08:34:46.997085] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:12.115 [2024-07-12 08:34:46.997226] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:12.115 passed 00:08:12.115 Test: lvol_esnap_hotplug ... 
00:08:12.115 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:12.115 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:12.115 [2024-07-12 08:34:46.998410] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 3ed666b0-da72-45f1-8849-3a5dad7a2bdf: failed to create esnap bs_dev: error -12 00:08:12.115 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:12.115 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:12.115 [2024-07-12 08:34:46.998946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol cc295b57-5c03-4eac-9984-9a80a913b921: failed to create esnap bs_dev: error -12 00:08:12.115 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:12.115 [2024-07-12 08:34:46.999305] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol bc02dd7e-5a0d-4fa0-9389-2d8906d8d2df: failed to create esnap bs_dev: error -12 00:08:12.115 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:12.115 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:12.115 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:12.115 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:12.115 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:12.115 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:12.115 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:12.115 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:12.115 passed 00:08:12.115 Test: lvol_get_by ...passed 00:08:12.115 Test: lvol_shallow_copy ...[2024-07-12 08:34:47.001967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:12.115 [2024-07-12 08:34:47.002131] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 0b883b2e-4884-4101-898d-c5bb8adb1eea shallow copy, ext_dev must not be NULL 00:08:12.115 passed 00:08:12.115 Test: lvol_set_parent ...[2024-07-12 08:34:47.002612] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:12.115 [2024-07-12 08:34:47.002766] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:12.115 passed 00:08:12.115 Test: lvol_set_external_parent ...[2024-07-12 08:34:47.003241] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:12.115 [2024-07-12 08:34:47.003402] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:12.115 [2024-07-12 08:34:47.003587] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:12.115 passed 00:08:12.115 00:08:12.115 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.115 suites 1 1 n/a 0 0 00:08:12.115 tests 37 37 37 0 0 00:08:12.115 asserts 1505 1505 1505 0 n/a 00:08:12.115 00:08:12.115 Elapsed time = 0.015 seconds 00:08:12.115 00:08:12.115 real 0m0.060s 00:08:12.115 user 0m0.035s 00:08:12.115 sys 0m0.014s 
00:08:12.115 08:34:47 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.115 08:34:47 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 ************************************ 00:08:12.115 END TEST unittest_lvol 00:08:12.115 ************************************ 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:12.115 08:34:47 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:12.115 08:34:47 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 ************************************ 00:08:12.115 START TEST unittest_nvme_rdma 00:08:12.115 ************************************ 00:08:12.115 08:34:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:12.115 00:08:12.115 00:08:12.115 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.115 http://cunit.sourceforge.net/ 00:08:12.115 00:08:12.115 00:08:12.115 Suite: nvme_rdma 00:08:12.115 Test: test_nvme_rdma_build_sgl_request ...[2024-07-12 08:34:47.087658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:12.115 [2024-07-12 08:34:47.088245] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:12.115 [2024-07-12 08:34:47.088595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:12.115 Test: test_nvme_rdma_build_contig_request ...[2024-07-12 08:34:47.089070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:12.115 Test: test_nvme_rdma_create_reqs ...[2024-07-12 08:34:47.089636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_create_rsps ...[2024-07-12 08:34:47.090273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-12 08:34:47.090701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:12.115 [2024-07-12 08:34:47.090896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_poller_create ...passed 00:08:12.115 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-12 08:34:47.091308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_ctrlr_construct ...passed 00:08:12.115 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:12.115 Test: test_nvme_rdma_req_init ...passed 00:08:12.115 Test: test_nvme_rdma_validate_cm_event ...[2024-07-12 08:34:47.092483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:12.115 [2024-07-12 08:34:47.092632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_qpair_init ...passed 00:08:12.115 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:12.115 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:12.115 Test: test_rdma_get_memory_translation ...[2024-07-12 08:34:47.093408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:12.115 [2024-07-12 08:34:47.093628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:12.115 passed 00:08:12.115 Test: test_get_rdma_qpair_from_wc ...passed 00:08:12.115 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:12.115 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-12 08:34:47.094270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:12.115 [2024-07-12 08:34:47.094404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:12.115 passed 00:08:12.115 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-12 08:34:47.094851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:12.115 [2024-07-12 08:34:47.095013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:12.115 [2024-07-12 08:34:47.095253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe8b70fd80 on poll group 0x60c000000040 00:08:12.115 [2024-07-12 08:34:47.095406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:12.115 [2024-07-12 08:34:47.095615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:12.115 [2024-07-12 08:34:47.095752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe8b70fd80 on poll group 0x60c000000040 00:08:12.115 [2024-07-12 08:34:47.095929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:12.115 passed 00:08:12.115 00:08:12.115 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.115 suites 1 1 n/a 0 0 00:08:12.115 tests 21 21 21 0 0 00:08:12.115 asserts 397 397 397 0 n/a 00:08:12.115 00:08:12.115 Elapsed time = 0.004 seconds 00:08:12.115 00:08:12.115 real 0m0.045s 00:08:12.115 user 0m0.024s 00:08:12.115 sys 0m0.015s 00:08:12.115 08:34:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.115 08:34:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 ************************************ 00:08:12.115 END TEST unittest_nvme_rdma 00:08:12.115 ************************************ 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:12.115 08:34:47 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 ************************************ 00:08:12.115 START TEST unittest_nvmf_transport 00:08:12.115 ************************************ 00:08:12.115 08:34:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:12.115 00:08:12.115 00:08:12.115 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.115 http://cunit.sourceforge.net/ 00:08:12.115 00:08:12.115 00:08:12.115 Suite: nvmf 00:08:12.115 Test: test_spdk_nvmf_transport_create ...[2024-07-12 08:34:47.188892] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:12.115 [2024-07-12 08:34:47.189443] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:12.115 [2024-07-12 08:34:47.189662] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:12.115 [2024-07-12 08:34:47.189946] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:12.115 passed 00:08:12.115 Test: test_nvmf_transport_poll_group_create ...passed 00:08:12.115 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-12 08:34:47.190869] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:12.115 [2024-07-12 08:34:47.191161] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:12.115 [2024-07-12 08:34:47.191297] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:12.115 passed 00:08:12.115 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:12.115 00:08:12.115 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.115 suites 1 1 n/a 0 0 00:08:12.115 tests 4 4 4 0 0 00:08:12.115 asserts 49 49 49 0 n/a 00:08:12.115 00:08:12.115 Elapsed time = 0.002 seconds 00:08:12.115 ************************************ 00:08:12.115 END TEST unittest_nvmf_transport 00:08:12.115 ************************************ 00:08:12.115 00:08:12.115 real 0m0.048s 00:08:12.115 user 0m0.024s 00:08:12.115 sys 0m0.022s 00:08:12.115 08:34:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.115 08:34:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:12.115 08:34:47 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.115 08:34:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:12.115 ************************************ 00:08:12.115 START TEST unittest_rdma 00:08:12.115 ************************************ 00:08:12.115 08:34:47 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:12.115 00:08:12.115 00:08:12.115 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.115 http://cunit.sourceforge.net/ 00:08:12.115 00:08:12.115 00:08:12.115 Suite: rdma_common 00:08:12.115 Test: test_spdk_rdma_pd ...[2024-07-12 08:34:47.273789] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:12.115 [2024-07-12 08:34:47.274394] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:12.115 passed 00:08:12.115 00:08:12.115 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.115 suites 1 1 n/a 0 0 00:08:12.115 tests 1 1 1 0 0 00:08:12.116 asserts 31 31 31 0 n/a 00:08:12.116 00:08:12.116 Elapsed time = 0.001 seconds 00:08:12.116 00:08:12.116 real 0m0.029s 00:08:12.116 user 0m0.017s 00:08:12.116 sys 0m0.010s 00:08:12.116 08:34:47 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.116 08:34:47 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:12.116 ************************************ 00:08:12.116 END TEST unittest_rdma 00:08:12.116 ************************************ 00:08:12.374 08:34:47 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:12.374 08:34:47 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:12.374 08:34:47 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:12.374 08:34:47 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:08:12.374 08:34:47 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.374 08:34:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:12.374 ************************************ 00:08:12.374 START TEST unittest_nvme_cuse 00:08:12.374 ************************************ 00:08:12.374 08:34:47 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:12.374 00:08:12.374 00:08:12.374 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.374 http://cunit.sourceforge.net/ 00:08:12.374 00:08:12.374 00:08:12.374 Suite: nvme_cuse 00:08:12.374 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:12.374 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:12.374 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:12.374 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:12.374 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:12.374 Test: test_cuse_nvme_submit_io ...[2024-07-12 08:34:47.361328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:12.374 passed 00:08:12.374 Test: test_cuse_nvme_reset ...[2024-07-12 08:34:47.361921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:12.374 passed 00:08:13.308 Test: test_nvme_cuse_stop ...passed 00:08:13.308 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:13.308 00:08:13.308 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.308 suites 1 1 n/a 0 0 00:08:13.308 tests 9 9 9 0 0 00:08:13.308 asserts 118 118 118 0 n/a 00:08:13.308 00:08:13.308 Elapsed time = 1.002 seconds 00:08:13.308 ************************************ 00:08:13.308 END TEST unittest_nvme_cuse 00:08:13.308 ************************************ 00:08:13.308 00:08:13.308 real 0m1.039s 00:08:13.308 user 0m0.582s 00:08:13.308 sys 0m0.453s 00:08:13.308 08:34:48 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.308 08:34:48 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:13.308 08:34:48 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:13.308 08:34:48 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:13.308 08:34:48 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.308 08:34:48 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.308 08:34:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:13.308 ************************************ 00:08:13.308 START TEST unittest_nvmf 00:08:13.308 ************************************ 00:08:13.308 08:34:48 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:08:13.308 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:13.308 00:08:13.308 00:08:13.308 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.308 http://cunit.sourceforge.net/ 00:08:13.308 00:08:13.308 00:08:13.308 Suite: nvmf 00:08:13.308 Test: test_get_log_page ...[2024-07-12 08:34:48.458681] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:13.308 passed 00:08:13.308 Test: test_process_fabrics_cmd ...[2024-07-12 08:34:48.459895] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on 
qid 0 before CONNECT 00:08:13.308 passed 00:08:13.308 Test: test_connect ...[2024-07-12 08:34:48.460983] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:13.308 [2024-07-12 08:34:48.461309] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:13.308 [2024-07-12 08:34:48.461599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:13.308 [2024-07-12 08:34:48.461872] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:13.308 [2024-07-12 08:34:48.462216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:13.308 [2024-07-12 08:34:48.462513] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:13.308 [2024-07-12 08:34:48.462772] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:13.308 [2024-07-12 08:34:48.463111] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:13.309 [2024-07-12 08:34:48.463453] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:13.309 [2024-07-12 08:34:48.463767] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:13.309 [2024-07-12 08:34:48.464379] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:13.309 [2024-07-12 08:34:48.464746] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:13.309 [2024-07-12 08:34:48.465072] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:13.309 [2024-07-12 08:34:48.465397] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:13.309 [2024-07-12 08:34:48.465750] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:08:13.309 [2024-07-12 08:34:48.466143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:13.309 [2024-07-12 08:34:48.466464] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:13.309 passed 00:08:13.309 Test: test_get_ns_id_desc_list ...passed 00:08:13.309 Test: test_identify_ns ...[2024-07-12 08:34:48.467407] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:13.309 [2024-07-12 08:34:48.467919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:13.309 [2024-07-12 08:34:48.468282] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 
00:08:13.309 passed 00:08:13.309 Test: test_identify_ns_iocs_specific ...[2024-07-12 08:34:48.468896] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:13.309 [2024-07-12 08:34:48.469378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:13.309 passed 00:08:13.309 Test: test_reservation_write_exclusive ...passed 00:08:13.309 Test: test_reservation_exclusive_access ...passed 00:08:13.309 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:13.309 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:13.309 Test: test_reservation_notification_log_page ...passed 00:08:13.309 Test: test_get_dif_ctx ...passed 00:08:13.309 Test: test_set_get_features ...[2024-07-12 08:34:48.471319] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:13.309 [2024-07-12 08:34:48.471598] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:13.309 [2024-07-12 08:34:48.471858] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:13.309 [2024-07-12 08:34:48.472119] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:13.309 passed 00:08:13.309 Test: test_identify_ctrlr ...passed 00:08:13.309 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:13.309 Test: test_custom_admin_cmd ...passed 00:08:13.309 Test: test_fused_compare_and_write ...[2024-07-12 08:34:48.473573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:13.309 [2024-07-12 08:34:48.473825] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:13.309 [2024-07-12 08:34:48.474135] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:13.309 passed 00:08:13.309 Test: test_multi_async_event_reqs ...passed 00:08:13.309 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:13.309 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:13.309 Test: test_multi_async_events ...passed 00:08:13.309 Test: test_rae ...passed 00:08:13.309 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:13.309 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:13.309 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-12 08:34:48.476229] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:13.309 [2024-07-12 08:34:48.476541] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:13.309 passed 00:08:13.309 Test: test_zcopy_read ...passed 00:08:13.309 Test: test_zcopy_write ...passed 00:08:13.309 Test: test_nvmf_property_set ...passed 00:08:13.309 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-12 08:34:48.477668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:13.309 [2024-07-12 08:34:48.477915] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:13.309 passed 00:08:13.309 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-12 08:34:48.478350] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:13.309 [2024-07-12 08:34:48.478592] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:13.309 [2024-07-12 08:34:48.478885] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:13.309 passed 00:08:13.309 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:13.309 Test: test_nvmf_check_qpair_active ...[2024-07-12 08:34:48.479544] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:13.309 [2024-07-12 08:34:48.479808] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4744:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:13.309 [2024-07-12 08:34:48.480078] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:13.309 [2024-07-12 08:34:48.480365] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:13.309 [2024-07-12 08:34:48.480636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:13.309 passed 00:08:13.309 00:08:13.309 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.309 suites 1 1 n/a 0 0 00:08:13.309 tests 32 32 32 0 0 00:08:13.309 asserts 977 977 977 0 n/a 00:08:13.309 00:08:13.309 Elapsed time = 0.009 seconds 00:08:13.568 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:13.568 00:08:13.568 00:08:13.568 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.568 http://cunit.sourceforge.net/ 00:08:13.568 00:08:13.568 00:08:13.568 Suite: nvmf 00:08:13.568 Test: test_get_rw_params ...passed 00:08:13.568 Test: test_get_rw_ext_params ...passed 00:08:13.568 Test: test_lba_in_range ...passed 00:08:13.568 Test: test_get_dif_ctx ...passed 00:08:13.568 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:13.568 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-12 08:34:48.517624] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:13.568 [2024-07-12 08:34:48.518088] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:13.568 [2024-07-12 08:34:48.518295] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:13.568 passed 00:08:13.568 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-12 08:34:48.518704] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:13.568 [2024-07-12 08:34:48.518919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:13.568 passed 00:08:13.568 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-12 08:34:48.519357] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:13.568 [2024-07-12 08:34:48.519522] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:13.568 [2024-07-12 08:34:48.519731] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:13.568 [2024-07-12 08:34:48.519905] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:13.568 passed 00:08:13.568 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:13.568 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:13.568 00:08:13.568 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.568 suites 1 1 n/a 0 0 00:08:13.568 tests 10 10 10 0 0 00:08:13.568 asserts 159 159 159 0 n/a 00:08:13.568 00:08:13.568 Elapsed time = 0.002 seconds 00:08:13.568 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:13.568 00:08:13.568 00:08:13.568 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.568 http://cunit.sourceforge.net/ 00:08:13.568 00:08:13.568 00:08:13.568 Suite: nvmf 00:08:13.568 Test: test_discovery_log ...passed 00:08:13.568 Test: test_discovery_log_with_filters ...passed 00:08:13.568 00:08:13.568 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.569 suites 1 1 n/a 0 0 00:08:13.569 tests 2 2 2 0 0 00:08:13.569 asserts 238 238 238 0 n/a 00:08:13.569 00:08:13.569 Elapsed time = 0.003 seconds 00:08:13.569 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:13.569 00:08:13.569 00:08:13.569 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.569 http://cunit.sourceforge.net/ 00:08:13.569 00:08:13.569 00:08:13.569 Suite: nvmf 00:08:13.569 Test: nvmf_test_create_subsystem ...[2024-07-12 08:34:48.597026] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:13.569 [2024-07-12 08:34:48.597453] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:13.569 [2024-07-12 08:34:48.597788] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:13.569 [2024-07-12 08:34:48.598026] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:13.569 [2024-07-12 08:34:48.598180] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:08:13.569 [2024-07-12 08:34:48.598260] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:13.569 [2024-07-12 08:34:48.598426] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:13.569 [2024-07-12 08:34:48.598598] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:13.569 [2024-07-12 08:34:48.598751] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:13.569 [2024-07-12 08:34:48.598931] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:13.569 [2024-07-12 08:34:48.599020] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:13.569 [2024-07-12 08:34:48.599244] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:13.569 [2024-07-12 08:34:48.599553] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:13.569 [2024-07-12 08:34:48.599792] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:13.569 [2024-07-12 08:34:48.600041] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:13.569 [2024-07-12 08:34:48.600223] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:13.569 [2024-07-12 08:34:48.600547] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:13.569 [2024-07-12 08:34:48.600746] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:13.569 [2024-07-12 08:34:48.602606] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:13.569 [2024-07-12 08:34:48.603206] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:13.569 [2024-07-12 08:34:48.603673] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:13.569 [2024-07-12 08:34:48.604112] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:13.569 passed 00:08:13.569 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-12 08:34:48.605203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:13.569 [2024-07-12 08:34:48.605505] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:13.569 passed 00:08:13.569 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-12 08:34:48.606282] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:13.569 passed
00:08:13.569 Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:08:13.569 Test: test_spdk_nvmf_ns_visible ...[2024-07-12 08:34:48.607154] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11
00:08:13.569 passed
00:08:13.569 Test: test_reservation_register ...[2024-07-12 08:34:48.608047] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 [2024-07-12 08:34:48.608436] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant
00:08:13.569 passed
00:08:13.569 Test: test_reservation_register_with_ptpl ...passed
00:08:13.569 Test: test_reservation_acquire_preempt_1 ...[2024-07-12 08:34:48.610220] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_acquire_release_with_ptpl ...passed
00:08:13.569 Test: test_reservation_release ...[2024-07-12 08:34:48.612580] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_unregister_notification ...[2024-07-12 08:34:48.613320] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_release_notification ...[2024-07-12 08:34:48.613978] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_release_notification_write_exclusive ...[2024-07-12 08:34:48.614670] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_clear_notification ...[2024-07-12 08:34:48.615351] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_reservation_preempt_notification ...[2024-07-12 08:34:48.616043] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:08:13.569 passed
00:08:13.569 Test: test_spdk_nvmf_ns_event ...passed
00:08:13.569 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:08:13.569 Test: test_nvmf_subsystem_add_ctrlr ...passed
00:08:13.569 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-12 08:34:48.617865] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:08:13.569 [2024-07-12 08:34:48.618165] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport
00:08:13.569 passed
00:08:13.569 Test: test_nvmf_ns_reservation_report ...[2024-07-12 08:34:48.618757] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3465:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:08:13.569 passed
00:08:13.569 Test: test_nvmf_nqn_is_valid ...[2024-07-12 08:34:48.619258] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:08:13.569 [2024-07-12 08:34:48.619530] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:6972a1c8-b29f-4688-8011-66e91c57c33": uuid is not the correct length
00:08:13.569 [2024-07-12 08:34:48.619804] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:08:13.569 passed
00:08:13.569 Test: test_nvmf_ns_reservation_restore ...[2024-07-12 08:34:48.620363] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:08:13.569 passed
00:08:13.569 Test: test_nvmf_subsystem_state_change ...passed
00:08:13.570 Test: test_nvmf_reservation_custom_ops ...passed
00:08:13.570
00:08:13.570 Run Summary: Type Total Ran Passed Failed Inactive
00:08:13.570 suites 1 1 n/a 0 0
00:08:13.570 tests 24 24 24 0 0
00:08:13.570 asserts 499 499 499 0 n/a
00:08:13.570
00:08:13.570 Elapsed time = 0.012 seconds
00:08:13.570 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:08:13.570
00:08:13.570
00:08:13.570 CUnit - A unit testing framework for C - Version 2.1-3
00:08:13.570 http://cunit.sourceforge.net/
00:08:13.570
00:08:13.570
00:08:13.570 Suite: nvmf
00:08:13.570 Test: test_nvmf_tcp_create ...[2024-07-12 08:34:48.686622] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:08:13.570 passed
00:08:13.570 Test: test_nvmf_tcp_destroy ...passed
00:08:13.828 Test: test_nvmf_tcp_poll_group_create ...passed
00:08:13.828 Test: test_nvmf_tcp_send_c2h_data ...passed
00:08:13.828 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:08:13.828 Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:08:13.828 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed
00:08:13.828 Test: test_nvmf_tcp_send_c2h_term_req ...passed
00:08:13.828 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:08:13.828 Test: test_nvmf_tcp_icreq_handle ...passed
00:08:13.828 Test: test_nvmf_tcp_check_xfer_type ...passed
00:08:13.828 Test: test_nvmf_tcp_invalid_sgl ...passed
00:08:13.828 Test: test_nvmf_tcp_pdu_ch_handle ...passed
00:08:13.828 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-12 08:34:48.786631] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.828 [2024-07-12 08:34:48.786721] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.828 [2024-07-12 08:34:48.786817] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.828 [2024-07-12 08:34:48.786857] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.828 [2024-07-12 08:34:48.786892] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.828 [2024-07-12 08:34:48.786976] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:08:13.828 [2024-07-12 08:34:48.787070] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.828 [2024-07-12 08:34:48.787128] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.828 [2024-07-12 08:34:48.787162] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:08:13.828 [2024-07-12 08:34:48.787190] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.828 [2024-07-12 08:34:48.787212] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.828 [2024-07-12 08:34:48.787247] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787272] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787316] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787374] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2517:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:08:13.829 [2024-07-12 08:34:48.787411] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787433] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba272f0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787483] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc6ba28050
00:08:13.829 [2024-07-12 08:34:48.787577] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787623] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787660] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2306:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc6ba277b0
00:08:13.829 [2024-07-12 08:34:48.787685] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787712] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787739] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:08:13.829 [2024-07-12 08:34:48.787773] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787815] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787853] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:08:13.829 [2024-07-12 08:34:48.787895] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787924] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.787951] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.787979] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.788033] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.788058] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.788096] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.788127] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.788165] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.788188] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.788247] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.788286] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 [2024-07-12 08:34:48.788340] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:08:13.829 [2024-07-12 08:34:48.788365] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6ba277b0 is same with the state(5) to be set
00:08:13.829 passed
00:08:13.829 Test: test_nvmf_tcp_tls_generate_psk_id ...passed
00:08:13.829 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed
00:08:13.829 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed
00:08:13.829
00:08:13.829 Run Summary: Type Total Ran Passed Failed Inactive
00:08:13.829 suites 1 1 n/a 0 0
00:08:13.829 tests 17 17 17 0 0
00:08:13.829 asserts 222 222 222 0 n/a
00:08:13.829
00:08:13.829 Elapsed time = 0.145 seconds
00:08:13.829 [2024-07-12 08:34:48.807113] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:08:13.829 [2024-07-12 08:34:48.807183] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:08:13.829 [2024-07-12 08:34:48.807396] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:08:13.829 [2024-07-12 08:34:48.807425] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:08:13.829 [2024-07-12 08:34:48.807586] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:08:13.829 [2024-07-12 08:34:48.807614] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:08:13.829 08:34:48 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:08:13.829
00:08:13.829
00:08:13.829 CUnit - A unit testing framework for C - Version 2.1-3
00:08:13.829 http://cunit.sourceforge.net/
00:08:13.829
00:08:13.829
00:08:13.829 Suite: nvmf
00:08:13.829 Test: test_nvmf_tgt_create_poll_group ...passed
00:08:13.829
00:08:13.829 Run Summary: Type Total Ran Passed Failed Inactive
00:08:13.829 suites 1 1 n/a 0 0
00:08:13.829 tests 1 1 1 0 0
00:08:13.829 asserts 17 17 17 0 n/a
00:08:13.829
00:08:13.829 Elapsed time = 0.022 seconds
00:08:13.829
00:08:13.829 real 0m0.526s
00:08:13.829 user 0m0.219s
00:08:13.829 sys 0m0.277s
00:08:13.829 08:34:48 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:13.829 ************************************
00:08:13.829 END TEST unittest_nvmf
00:08:13.829 ************************************
00:08:13.829 08:34:48 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:08:13.829 08:34:48 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:13.829 08:34:48 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:08:13.829 08:34:48 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:08:13.829 08:34:49 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:08:13.829 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:13.829 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:13.829 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:13.829 ************************************
00:08:13.829 START TEST unittest_nvmf_rdma
00:08:13.829 ************************************
00:08:13.829 08:34:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:08:14.091
00:08:14.091
00:08:14.091 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.091 http://cunit.sourceforge.net/
00:08:14.091
00:08:14.091
00:08:14.091 Suite: nvmf
00:08:14.091 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-12 08:34:49.035375] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:08:14.091 passed
00:08:14.091 Test: test_spdk_nvmf_rdma_request_process ...passed
00:08:14.091 Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:08:14.091 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:08:14.091 Test: test_nvmf_rdma_opts_init ...passed
00:08:14.091 Test: test_nvmf_rdma_request_free_data ...passed
00:08:14.091 Test: test_nvmf_rdma_resources_create ...[2024-07-12 08:34:49.035726] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:08:14.091 [2024-07-12 08:34:49.035784] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:08:14.091 passed
00:08:14.091 Test: test_nvmf_rdma_qpair_compare ...passed
00:08:14.091 Test: test_nvmf_rdma_resize_cq ...[2024-07-12 08:34:49.038475] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:08:14.091 Using CQ of insufficient size may lead to CQ overrun
00:08:14.091 passed
00:08:14.091
00:08:14.091 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.091 suites 1 1 n/a 0 0
00:08:14.091 tests 9 9 9 0 0
00:08:14.091 asserts 579 579 579 0 n/a
00:08:14.091
00:08:14.091 Elapsed time = 0.003 seconds
00:08:14.091 [2024-07-12 08:34:49.038589] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:08:14.091 [2024-07-12 08:34:49.038650] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:08:14.091
00:08:14.091 real 0m0.044s
00:08:14.091 user 0m0.031s
00:08:14.091 sys 0m0.014s
00:08:14.091 08:34:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.091 ************************************
00:08:14.091 END TEST unittest_nvmf_rdma
00:08:14.091 ************************************
00:08:14.091 08:34:49 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:08:14.091 08:34:49 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:14.091 08:34:49 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:08:14.091 08:34:49 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi
00:08:14.091 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:14.091 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.091 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:14.091 ************************************
00:08:14.091 START TEST unittest_scsi
00:08:14.091 ************************************
00:08:14.091 08:34:49 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi
00:08:14.091 08:34:49 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:08:14.091
00:08:14.091
00:08:14.091 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.091 http://cunit.sourceforge.net/
00:08:14.091
00:08:14.091
00:08:14.091 Suite: dev_suite
00:08:14.091 Test: dev_destruct_null_dev ...passed
00:08:14.091 Test: dev_destruct_zero_luns ...passed
00:08:14.091 Test: dev_destruct_null_lun ...passed
00:08:14.091 Test: dev_destruct_success ...passed
00:08:14.091 Test: dev_construct_num_luns_zero ...[2024-07-12 08:34:49.128621] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:08:14.091 passed
00:08:14.091 Test: dev_construct_no_lun_zero ...passed
00:08:14.091 Test: dev_construct_null_lun ...passed
00:08:14.091 Test: dev_construct_name_too_long ...passed
00:08:14.091 Test: dev_construct_success ...passed
00:08:14.091 Test: dev_construct_success_lun_zero_not_first ...passed
00:08:14.091 Test: dev_queue_mgmt_task_success ...passed
00:08:14.091 Test: dev_queue_task_success ...passed
00:08:14.091 Test: dev_stop_success ...passed
00:08:14.091 Test: dev_add_port_max_ports ...passed
00:08:14.091 Test: dev_add_port_construct_failure1 ...passed
00:08:14.091 Test: dev_add_port_construct_failure2 ...passed
00:08:14.091 Test: dev_add_port_success1 ...passed
00:08:14.091 Test: dev_add_port_success2 ...passed
00:08:14.091 Test: dev_add_port_success3 ...passed
00:08:14.091 Test: dev_find_port_by_id_num_ports_zero ...passed
00:08:14.091 Test: dev_find_port_by_id_id_not_found_failure ...passed
00:08:14.091 Test: dev_find_port_by_id_success ...passed
00:08:14.091 Test: dev_add_lun_bdev_not_found ...passed
00:08:14.091 Test: dev_add_lun_no_free_lun_id ...passed
00:08:14.091 Test: dev_add_lun_success1 ...passed
00:08:14.091 Test: dev_add_lun_success2 ...passed
00:08:14.091 Test: dev_check_pending_tasks ...passed
00:08:14.091 Test: dev_iterate_luns ...passed
00:08:14.091 Test: dev_find_free_lun ...passed
00:08:14.091
00:08:14.092 [2024-07-12 08:34:49.128971] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:08:14.092 [2024-07-12 08:34:49.129014] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:08:14.092 [2024-07-12 08:34:49.129048] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:08:14.092 [2024-07-12 08:34:49.129313] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:08:14.092 [2024-07-12 08:34:49.129399] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long
00:08:14.092 [2024-07-12 08:34:49.129487] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:08:14.092 [2024-07-12 08:34:49.129871] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:08:14.092 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.092 suites 1 1 n/a 0 0
00:08:14.092 tests 29 29 29 0 0
00:08:14.092 asserts 97 97 97 0 n/a
00:08:14.092
00:08:14.092 Elapsed time = 0.002 seconds
00:08:14.092 08:34:49 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:08:14.092
00:08:14.092
00:08:14.092 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.092 http://cunit.sourceforge.net/
00:08:14.092
00:08:14.092
00:08:14.092 Suite: lun_suite
00:08:14.092 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-12 08:34:49.162336] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:08:14.092 passed
00:08:14.092 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed
00:08:14.092 Test: lun_task_mgmt_execute_lun_reset ...passed
00:08:14.092 Test: lun_task_mgmt_execute_target_reset ...passed
00:08:14.092 Test: lun_task_mgmt_execute_invalid_case ...passed
00:08:14.092 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:08:14.092 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:08:14.092 Test: lun_append_task_null_lun_not_supported ...passed
00:08:14.092 Test: lun_execute_scsi_task_pending ...passed
00:08:14.092 Test: lun_execute_scsi_task_complete ...passed
00:08:14.092 Test: lun_execute_scsi_task_resize ...passed
00:08:14.092 Test: lun_destruct_success ...passed
00:08:14.092 Test: lun_construct_null_ctx ...passed
00:08:14.092 Test: lun_construct_success ...passed
00:08:14.092 Test: lun_reset_task_wait_scsi_task_complete ...passed
00:08:14.092 Test: lun_reset_task_suspend_scsi_task ...passed
00:08:14.092 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:08:14.092 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:08:14.092
00:08:14.092 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.092 suites 1 1 n/a 0 0
00:08:14.092 tests 18 18 18 0 0
00:08:14.092 asserts 153 153 153 0 n/a
00:08:14.092
00:08:14.092 Elapsed time = 0.001 seconds
00:08:14.092 [2024-07-12 08:34:49.162689] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:08:14.092 [2024-07-12 08:34:49.162847] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:08:14.092 [2024-07-12 08:34:49.163047] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:08:14.092 08:34:49 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:08:14.092
00:08:14.092
00:08:14.092 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.092 http://cunit.sourceforge.net/
00:08:14.092
00:08:14.092
00:08:14.092 Suite: scsi_suite
00:08:14.092 Test: scsi_init ...passed
00:08:14.092
00:08:14.092 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.092 suites 1 1 n/a 0 0
00:08:14.092 tests 1 1 1 0 0
00:08:14.092 asserts 1 1 1 0 n/a
00:08:14.092
00:08:14.092 Elapsed time = 0.000 seconds
00:08:14.092 08:34:49 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:08:14.092
00:08:14.092
00:08:14.092 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.092 http://cunit.sourceforge.net/
00:08:14.092
00:08:14.092
00:08:14.092 Suite: translation_suite
00:08:14.092 Test: mode_select_6_test ...passed
00:08:14.092 Test: mode_select_6_test2 ...passed
00:08:14.092 Test: mode_sense_6_test ...passed
00:08:14.092 Test: mode_sense_10_test ...passed
00:08:14.092 Test: inquiry_evpd_test ...passed
00:08:14.092 Test: inquiry_standard_test ...passed
00:08:14.092 Test: inquiry_overflow_test ...passed
00:08:14.092 Test: task_complete_test ...passed
00:08:14.092 Test: lba_range_test ...passed
00:08:14.092 Test: xfer_len_test ...[2024-07-12 08:34:49.236873] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:08:14.092 passed
00:08:14.092 Test: xfer_test ...passed
00:08:14.092 Test: scsi_name_padding_test ...passed
00:08:14.092 Test: get_dif_ctx_test ...passed
00:08:14.092 Test: unmap_split_test ...passed
00:08:14.092
00:08:14.092 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.092 suites 1 1 n/a 0 0
00:08:14.092 tests 14 14 14 0 0
00:08:14.092 asserts 1205 1205 1205 0 n/a
00:08:14.092
00:08:14.092 Elapsed time = 0.004 seconds
00:08:14.092 08:34:49 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:08:14.092
00:08:14.092
00:08:14.092 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.092 http://cunit.sourceforge.net/
00:08:14.092
00:08:14.092
00:08:14.092 Suite: reservation_suite
00:08:14.092 Test: test_reservation_register ...[2024-07-12 08:34:49.265366] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.092 passed
00:08:14.092 Test: test_reservation_reserve ...passed
00:08:14.092 Test: test_all_registrant_reservation_reserve ...passed
00:08:14.092 Test: test_all_registrant_reservation_access ...passed
00:08:14.092 Test: test_reservation_preempt_non_all_regs ...passed
00:08:14.092 Test: test_reservation_preempt_all_regs ...passed
00:08:14.092 Test: test_reservation_cmds_conflict ...passed
00:08:14.092 Test: test_scsi2_reserve_release ...passed
00:08:14.092 Test: test_pr_with_scsi2_reserve_release ...passed
00:08:14.092
00:08:14.093 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.093 suites 1 1 n/a 0 0
00:08:14.093 tests 9 9 9 0 0
00:08:14.093 asserts 344 344 344 0 n/a
00:08:14.093
00:08:14.093 Elapsed time = 0.001 seconds
00:08:14.093 [2024-07-12 08:34:49.265733] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.265808] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:08:14.093 [2024-07-12 08:34:49.265901] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:08:14.093 [2024-07-12 08:34:49.265970] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.266091] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.266141] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8
00:08:14.093 [2024-07-12 08:34:49.266193] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa
00:08:14.093 [2024-07-12 08:34:49.266246] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.266292] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:08:14.093 [2024-07-12 08:34:49.266394] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.266488] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.093 [2024-07-12 08:34:49.266534] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a
00:08:14.093 [2024-07-12 08:34:49.266579] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:08:14.093 [2024-07-12 08:34:49.266600] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:08:14.093 [2024-07-12 08:34:49.266625] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:08:14.093 [2024-07-12 08:34:49.266645] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:08:14.093 [2024-07-12 08:34:49.266715] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:08:14.362
00:08:14.362 real 0m0.172s
00:08:14.362 user 0m0.093s
00:08:14.362 sys 0m0.079s
00:08:14.362 ************************************
00:08:14.362 08:34:49 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.362 08:34:49 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:08:14.362 END TEST unittest_scsi
00:08:14.362 ************************************
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:14.362 08:34:49 unittest -- unit/unittest.sh@278 -- # uname -s
00:08:14.362 08:34:49 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']'
00:08:14.362 08:34:49 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:14.362 ************************************
00:08:14.362 START TEST unittest_sock
00:08:14.362 ************************************
00:08:14.362 08:34:49 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock
00:08:14.362 08:34:49 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:08:14.362
00:08:14.362
00:08:14.362 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.362 http://cunit.sourceforge.net/
00:08:14.362
00:08:14.362
00:08:14.362 Suite: sock
00:08:14.362 Test: posix_sock ...passed
00:08:14.362 Test: ut_sock ...passed
00:08:14.362 Test: posix_sock_group ...passed
00:08:14.362 Test: ut_sock_group ...passed
00:08:14.362 Test: posix_sock_group_fairness ...passed
00:08:14.362 Test: _posix_sock_close ...passed
00:08:14.362 Test: sock_get_default_opts ...passed
00:08:14.362 Test: ut_sock_impl_get_set_opts ...passed
00:08:14.362 Test: posix_sock_impl_get_set_opts ...passed
00:08:14.362 Test: ut_sock_map ...passed
00:08:14.362 Test: override_impl_opts ...passed
00:08:14.362 Test: ut_sock_group_get_ctx ...passed
00:08:14.362
00:08:14.362 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.362 suites 1 1 n/a 0 0
00:08:14.362 tests 12 12 12 0 0
00:08:14.362 asserts 349 349 349 0 n/a
00:08:14.362
00:08:14.362 Elapsed time = 0.008 seconds
00:08:14.362 08:34:49 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:08:14.362
00:08:14.362
00:08:14.362 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.362 http://cunit.sourceforge.net/
00:08:14.362
00:08:14.362
00:08:14.362 Suite: posix
00:08:14.362 Test: flush ...passed
00:08:14.362
00:08:14.362 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.362 suites 1 1 n/a 0 0
00:08:14.362 tests 1 1 1 0 0
00:08:14.362 asserts 28 28 28 0 n/a
00:08:14.362
00:08:14.362 Elapsed time = 0.000 seconds
00:08:14.362 08:34:49 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:08:14.362
00:08:14.362 real 0m0.114s
00:08:14.362 user 0m0.055s
00:08:14.362 sys 0m0.033s
00:08:14.362 ************************************
00:08:14.362 08:34:49 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.362 08:34:49 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x
00:08:14.362 END TEST unittest_sock
00:08:14.362 ************************************
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:14.362 08:34:49 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.362 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:14.362 ************************************
00:08:14.363 START TEST unittest_thread
00:08:14.363 ************************************
00:08:14.363 08:34:49 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:08:14.363
00:08:14.363
00:08:14.363 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.363 http://cunit.sourceforge.net/
00:08:14.363
00:08:14.363
00:08:14.363 Suite: io_channel
00:08:14.363 Test: thread_alloc ...passed
00:08:14.363 Test: thread_send_msg ...passed
00:08:14.363 Test: thread_poller ...passed
00:08:14.363 Test: poller_pause ...passed
00:08:14.363 Test: thread_for_each ...passed
00:08:14.363 Test: for_each_channel_remove ...passed
00:08:14.363 Test: for_each_channel_unreg ...[2024-07-12 08:34:49.530304] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7ffed66b19b0 already registered (old:0x613000000200 new:0x6130000003c0)
00:08:14.363 passed
00:08:14.363 Test: thread_name ...passed
00:08:14.363 Test: channel ...[2024-07-12 08:34:49.534403] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x555a13170180
00:08:14.363 passed
00:08:14.363 Test: channel_destroy_races ...passed
00:08:14.363 Test: thread_exit_test ...[2024-07-12 08:34:49.539551] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully
00:08:14.363 passed
00:08:14.363 Test: thread_update_stats_test ...passed
00:08:14.363 Test: nested_channel ...passed
00:08:14.363 Test: device_unregister_and_thread_exit_race ...passed
00:08:14.363 Test: cache_closest_timed_poller ...passed
00:08:14.363 Test: multi_timed_pollers_have_same_expiration ...passed
00:08:14.363 Test: io_device_lookup ...passed
00:08:14.363 Test: spdk_spin ...[2024-07-12 08:34:49.550511] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:08:14.363 [2024-07-12 08:34:49.550556] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffed66b19a0
00:08:14.363 [2024-07-12 08:34:49.550641] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:08:14.621 [2024-07-12 08:34:49.552311] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:08:14.621 [2024-07-12 08:34:49.552373] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffed66b19a0
00:08:14.621 [2024-07-12 08:34:49.552399] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:08:14.621 [2024-07-12 08:34:49.552429] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffed66b19a0
00:08:14.621 [2024-07-12 08:34:49.552453] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:08:14.621 [2024-07-12 08:34:49.552493] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffed66b19a0
00:08:14.621 [2024-07-12 08:34:49.552517] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:08:14.621 [2024-07-12 08:34:49.552559] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffed66b19a0
00:08:14.621 passed
00:08:14.621 Test: for_each_channel_and_thread_exit_race ...passed
00:08:14.621 Test: for_each_thread_and_thread_exit_race ...passed
00:08:14.621
00:08:14.621 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.621 suites 1 1 n/a 0 0
00:08:14.621 tests 20 20 20 0 0
00:08:14.621 asserts 409 409 409 0 n/a
00:08:14.621
00:08:14.621 Elapsed time = 0.050 seconds
00:08:14.621
00:08:14.621 real 0m0.092s
00:08:14.621 user 0m0.068s
00:08:14.621 sys 0m0.024s
00:08:14.622 08:34:49 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.622 08:34:49 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:08:14.622 ************************************
00:08:14.622 END TEST unittest_thread
00:08:14.622 ************************************
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:14.622 08:34:49 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:14.622 ************************************
00:08:14.622 START TEST unittest_iobuf
00:08:14.622 ************************************
00:08:14.622 08:34:49 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:08:14.622
00:08:14.622
00:08:14.622 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.622 http://cunit.sourceforge.net/
00:08:14.622
00:08:14.622
00:08:14.622 Suite: io_channel
00:08:14.622 Test: iobuf ...passed
00:08:14.622 Test: iobuf_cache ...[2024-07-12 08:34:49.659365] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:08:14.622 passed
00:08:14.622
00:08:14.622 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.622 suites 1 1 n/a 0 0
00:08:14.622 tests 2 2 2 0 0
00:08:14.622 asserts 107 107 107 0 n/a
00:08:14.622
00:08:14.622 Elapsed time = 0.006 seconds
00:08:14.622 [2024-07-12 08:34:49.659708] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:08:14.622 [2024-07-12 08:34:49.659855] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:08:14.622 [2024-07-12 08:34:49.659891] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:08:14.622 [2024-07-12 08:34:49.659957] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:08:14.622 [2024-07-12 08:34:49.659991] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:08:14.622
00:08:14.622 real 0m0.042s
00:08:14.622 user 0m0.028s
00:08:14.622 sys 0m0.014s
00:08:14.622 08:34:49 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:14.622 08:34:49 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:08:14.622 ************************************
00:08:14.622 END TEST unittest_iobuf
00:08:14.622 ************************************
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1142 -- # return 0
00:08:14.622 08:34:49 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:14.622 08:34:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:14.622 ************************************
00:08:14.622 START TEST unittest_util
00:08:14.622 ************************************
00:08:14.622 08:34:49 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util
00:08:14.622 08:34:49 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:08:14.622
00:08:14.622
00:08:14.622 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.622 http://cunit.sourceforge.net/
00:08:14.622
00:08:14.622
00:08:14.622 Suite: base64
00:08:14.622 Test: test_base64_get_encoded_strlen ...passed
00:08:14.622 Test: test_base64_get_decoded_len ...passed
00:08:14.622 Test: test_base64_encode ...passed
00:08:14.622 Test: test_base64_decode ...passed
00:08:14.622 Test: test_base64_urlsafe_encode ...passed
00:08:14.622 Test: test_base64_urlsafe_decode ...passed
00:08:14.622
00:08:14.622 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.622 suites 1 1 n/a 0 0
00:08:14.622 tests 6 6 6 0 0
00:08:14.622 asserts 112 112 112 0 n/a
00:08:14.622
00:08:14.622 Elapsed time = 0.000 seconds
00:08:14.622 08:34:49 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:08:14.622
00:08:14.622
00:08:14.622 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.622 http://cunit.sourceforge.net/
00:08:14.622
00:08:14.622
00:08:14.622 Suite: bit_array
00:08:14.622 Test: test_1bit ...passed
00:08:14.622 Test: test_64bit ...passed
00:08:14.622 Test: test_find ...passed
00:08:14.622 Test: test_resize ...passed
00:08:14.622 Test: test_errors ...passed
00:08:14.622 Test: test_count ...passed
00:08:14.622 Test: test_mask_store_load ...passed
00:08:14.622 Test: test_mask_clear ...passed
00:08:14.622
00:08:14.622 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.622 suites 1 1 n/a 0 0
00:08:14.622 tests 8 8 8 0 0
00:08:14.622 asserts 5075 5075 5075 0 n/a
00:08:14.622
00:08:14.622 Elapsed time = 0.001 seconds
00:08:14.622 08:34:49 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:08:14.622
00:08:14.622
00:08:14.622 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.622 http://cunit.sourceforge.net/
00:08:14.622
00:08:14.622
00:08:14.622 Suite: cpuset
00:08:14.622 Test: test_cpuset ...passed
00:08:14.622 Test: test_cpuset_parse ...[2024-07-12 08:34:49.806966] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '['
00:08:14.622 passed
00:08:14.622 Test: test_cpuset_fmt ...passed
00:08:14.622 Test: test_cpuset_foreach ...passed
00:08:14.622
00:08:14.622 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.622 suites 1 1 n/a 0 0
00:08:14.622 tests 4 4 4 0 0
00:08:14.622 asserts 90 90 90 0 n/a
00:08:14.622
00:08:14.622 Elapsed time = 0.002 seconds
00:08:14.622 [2024-07-12 08:34:49.807258] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:08:14.622 [2024-07-12 08:34:49.807336] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:08:14.622 [2024-07-12 08:34:49.807405] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:08:14.622 [2024-07-12 08:34:49.807429] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:08:14.622 [2024-07-12 08:34:49.807460] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:08:14.622 [2024-07-12 08:34:49.807489] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:08:14.622 [2024-07-12 08:34:49.807532] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:08:14.881 08:34:49 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:08:14.881
00:08:14.881
00:08:14.881 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.881 http://cunit.sourceforge.net/
00:08:14.881
00:08:14.881
00:08:14.881 Suite: crc16
00:08:14.881 Test: test_crc16_t10dif ...passed
00:08:14.881 Test: test_crc16_t10dif_seed ...passed
00:08:14.881 Test: test_crc16_t10dif_copy ...passed
00:08:14.881
00:08:14.881 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.881 suites 1 1 n/a 0 0
00:08:14.881 tests 3 3 3 0 0
00:08:14.881 asserts 5 5 5 0 n/a
00:08:14.881
00:08:14.881 Elapsed time = 0.000 seconds
00:08:14.881 08:34:49 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:08:14.881
00:08:14.881
00:08:14.881 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.881 http://cunit.sourceforge.net/
00:08:14.881
00:08:14.881
00:08:14.881 Suite: crc32_ieee
00:08:14.881 Test: test_crc32_ieee ...passed
00:08:14.881
00:08:14.881 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.881 suites 1 1 n/a 0 0
00:08:14.881 tests 1 1 1 0 0
00:08:14.881 asserts 1 1 1 0 n/a
00:08:14.881
00:08:14.881 Elapsed time = 0.000 seconds
00:08:14.881 08:34:49 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:08:14.881
00:08:14.881
00:08:14.881 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.881 http://cunit.sourceforge.net/
00:08:14.881
00:08:14.881
00:08:14.881 Suite: crc32c
00:08:14.881 Test: test_crc32c ...passed
00:08:14.881 Test: test_crc32c_nvme ...passed
00:08:14.881
00:08:14.881 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.881 suites 1 1 n/a 0 0
00:08:14.881 tests 2 2 2 0 0
00:08:14.881 asserts 16 16 16 0 n/a
00:08:14.881
00:08:14.881 Elapsed time = 0.000 seconds
00:08:14.881 08:34:49 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:08:14.881
00:08:14.881
00:08:14.881 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.881 http://cunit.sourceforge.net/
00:08:14.881
00:08:14.881
00:08:14.881 Suite: crc64
00:08:14.881 Test: test_crc64_nvme ...passed
00:08:14.881
00:08:14.881 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.881 suites 1 1 n/a 0 0
00:08:14.881 tests 1 1 1 0 0
00:08:14.881 asserts 4 4 4 0 n/a
00:08:14.881
00:08:14.881 Elapsed time = 0.000 seconds
00:08:14.881 08:34:49 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:08:14.881
00:08:14.881
00:08:14.881 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.881 http://cunit.sourceforge.net/
00:08:14.881
00:08:14.881
00:08:14.881 Suite: string
00:08:14.881 Test: test_parse_ip_addr ...passed
00:08:14.881 Test: test_str_chomp ...passed
00:08:14.881 Test: test_parse_capacity ...passed
00:08:14.881 Test: test_sprintf_append_realloc ...passed
00:08:14.881 Test: test_strtol ...passed
00:08:14.881 Test: test_strtoll ...passed
00:08:14.881 Test: test_strarray ...passed
00:08:14.881 Test: test_strcpy_replace ...passed
00:08:14.881
00:08:14.881 Run Summary: Type Total Ran Passed Failed Inactive
00:08:14.881 suites 1 1 n/a 0 0
00:08:14.881 tests 8 8 8 0 0
00:08:14.881 asserts 161 161 161 0 n/a
00:08:14.881
00:08:14.882 Elapsed time = 0.001 seconds
00:08:14.882 08:34:49 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:08:14.882
00:08:14.882
00:08:14.882 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.882 http://cunit.sourceforge.net/
00:08:14.882
00:08:14.882
00:08:14.882 Suite: dif
00:08:14.882 Test: dif_generate_and_verify_test ...[2024-07-12 08:34:49.983718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:08:14.882 [2024-07-12 08:34:49.984362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:08:14.882 [2024-07-12 08:34:49.984648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:08:14.882 [2024-07-12 08:34:49.984927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:08:14.882 [2024-07-12 08:34:49.985242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:08:14.882 passed
00:08:14.882 Test: dif_disable_check_test ...[2024-07-12 08:34:49.985516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:08:14.882 [2024-07-12 08:34:49.986583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:08:14.882 [2024-07-12 08:34:49.986898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:08:14.882 [2024-07-12 08:34:49.987183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:08:14.882 passed
00:08:14.882 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-12 08:34:49.988311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de
00:08:14.882 [2024-07-12 08:34:49.988648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8
00:08:14.882 [2024-07-12 08:34:49.989002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:08:14.882 [2024-07-12 08:34:49.989390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:08:14.882 [2024-07-12 08:34:49.989761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:08:14.882 [2024-07-12 08:34:49.990079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:08:14.882 [2024-07-12 08:34:49.990419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:08:14.882 [2024-07-12 08:34:49.990746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:08:14.882 [2024-07-12 08:34:49.991088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:08:14.882 [2024-07-12 08:34:49.991458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:08:14.882 [2024-07-12 08:34:49.991823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:08:14.882 passed
00:08:14.882 Test: dif_apptag_mask_test ...[2024-07-12 08:34:49.992142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234
00:08:14.882 passed
00:08:14.882 Test: dif_sec_512_md_0_error_test ...passed
00:08:14.882 Test: dif_sec_4096_md_0_error_test ...passed
00:08:14.882 Test: dif_sec_4100_md_128_error_test ...passed
00:08:14.882 Test: dif_guard_seed_test ...passed
00:08:14.882 Test: dif_guard_value_test ...[2024-07-12 08:34:49.992485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234
00:08:14.882 [2024-07-12 08:34:49.992734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:08:14.882 [2024-07-12 08:34:49.992787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:08:14.882 [2024-07-12 08:34:49.992821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:08:14.882 [2024-07-12 08:34:49.992868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:08:14.882 [2024-07-12 08:34:49.992909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:08:14.882 passed
00:08:14.882 Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:08:14.882 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:08:14.882 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 08:34:50.037572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c
00:08:14.882 [2024-07-12 08:34:50.040044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fa21, Actual=fe21
00:08:14.882 [2024-07-12 08:34:50.042529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:14.882 [2024-07-12 08:34:50.045031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:14.882 [2024-07-12 08:34:50.047511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a
00:08:14.882 [2024-07-12 08:34:50.049984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a
00:08:14.882 [2024-07-12 08:34:50.052467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=c5c7
00:08:14.882 [2024-07-12 08:34:50.053862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=c8c5
00:08:14.882 [2024-07-12 08:34:50.055255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed
00:08:14.882 [2024-07-12 08:34:50.057751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=3c574660, Actual=38574660
00:08:14.882 [2024-07-12 08:34:50.060241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:14.882 [2024-07-12 08:34:50.062729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:14.882 [2024-07-12 08:34:50.065215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a
00:08:14.882 [2024-07-12 08:34:50.067668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a
00:08:14.882 [2024-07-12 08:34:50.070162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8252e404
00:08:15.143 [2024-07-12 08:34:50.071562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=758783ca
00:08:15.143 [2024-07-12 08:34:50.073017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:08:15.143 [2024-07-12 08:34:50.075478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:08:15.143 [2024-07-12 08:34:50.077954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:15.143 [2024-07-12 08:34:50.080455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488
00:08:15.143 [2024-07-12 08:34:50.082916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a
00:08:15.143 [2024-07-12 08:34:50.085402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a
00:08:15.143 [2024-07-12 08:34:50.087862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960
00:08:15.143 [2024-07-12 08:34:50.089278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a
00:08:15.144 passed
00:08:15.144 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-12 08:34:50.089782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c
00:08:15.144 [2024-07-12 08:34:50.090122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21
00:08:15.144 [2024-07-12 08:34:50.090432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.090751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.091096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:08:15.144 [2024-07-12 08:34:50.091409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:08:15.144 [2024-07-12 08:34:50.091748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7
00:08:15.144 [2024-07-12 08:34:50.091930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c8c5
00:08:15.144 [2024-07-12 08:34:50.092114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed
00:08:15.144 [2024-07-12 08:34:50.092412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660
00:08:15.144 [2024-07-12 08:34:50.092781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.093095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.093418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:08:15.144 [2024-07-12 08:34:50.093730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058
00:08:15.144 [2024-07-12 08:34:50.094031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404
00:08:15.144 [2024-07-12 08:34:50.094226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=758783ca
00:08:15.144 [2024-07-12 08:34:50.094433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3
00:08:15.144 [2024-07-12 08:34:50.094726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266
00:08:15.144 [2024-07-12 08:34:50.095048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.095351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488
00:08:15.144 [2024-07-12 08:34:50.095703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:08:15.144 [2024-07-12 08:34:50.096007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058
00:08:15.144 passed
00:08:15.144 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-12 08:34:50.096363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960
00:08:15.144 [2024-07-12 08:34:50.096567]
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a 00:08:15.144 [2024-07-12 08:34:50.096788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.144 [2024-07-12 08:34:50.097108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:15.144 [2024-07-12 08:34:50.097400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.097738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.098058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.098386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.098694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.144 [2024-07-12 08:34:50.098926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c8c5 00:08:15.144 [2024-07-12 08:34:50.099089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:15.144 [2024-07-12 08:34:50.099385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:15.144 [2024-07-12 08:34:50.099695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.100006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.100345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.144 [2024-07-12 08:34:50.100641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.144 [2024-07-12 08:34:50.100928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.144 [2024-07-12 08:34:50.101090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=758783ca 00:08:15.144 [2024-07-12 08:34:50.101288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.144 [2024-07-12 08:34:50.101599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:15.144 [2024-07-12 08:34:50.101939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.102263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.102624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.102925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.103287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.144 passed 00:08:15.144 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-12 08:34:50.103464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a 00:08:15.144 [2024-07-12 08:34:50.103694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.144 [2024-07-12 08:34:50.104046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:15.144 [2024-07-12 08:34:50.104396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.104737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.105112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.105436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.144 [2024-07-12 08:34:50.105779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.144 [2024-07-12 08:34:50.105950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c8c5 00:08:15.144 [2024-07-12 08:34:50.106145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:15.144 [2024-07-12 08:34:50.106473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:15.144 [2024-07-12 08:34:50.106824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.107138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.144 [2024-07-12 08:34:50.107473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 [2024-07-12 08:34:50.107801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 
[2024-07-12 08:34:50.108139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.145 [2024-07-12 08:34:50.108346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=758783ca 00:08:15.145 [2024-07-12 08:34:50.108566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.145 [2024-07-12 08:34:50.108915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:15.145 [2024-07-12 08:34:50.109229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.109575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.109910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.110245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.110606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.145 [2024-07-12 08:34:50.110805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a 00:08:15.145 passed 00:08:15.145 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-12 08:34:50.111067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.145 [2024-07-12 08:34:50.111395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:15.145 [2024-07-12 08:34:50.111748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.112085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.112483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.112808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.113147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.145 passed 00:08:15.145 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-12 08:34:50.113310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c8c5 00:08:15.145 [2024-07-12 08:34:50.113594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:15.145 [2024-07-12 08:34:50.113956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:15.145 [2024-07-12 08:34:50.114306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.114634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.114970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 [2024-07-12 08:34:50.115300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 [2024-07-12 08:34:50.115645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.145 [2024-07-12 08:34:50.115841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=758783ca 00:08:15.145 [2024-07-12 08:34:50.116119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.145 [2024-07-12 08:34:50.116471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:15.145 [2024-07-12 08:34:50.116817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.117153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.117473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.117814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.118153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.145 [2024-07-12 08:34:50.118355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a 00:08:15.145 passed 00:08:15.145 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-12 08:34:50.118589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.145 [2024-07-12 08:34:50.118930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:08:15.145 [2024-07-12 08:34:50.119262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.119603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.119960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.120297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.120633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.145 passed 00:08:15.145 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-12 08:34:50.120811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c8c5 00:08:15.145 [2024-07-12 08:34:50.121049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:15.145 [2024-07-12 08:34:50.121377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:08:15.145 [2024-07-12 08:34:50.121735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.122072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.122416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 [2024-07-12 08:34:50.122760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.145 [2024-07-12 08:34:50.123092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.145 [2024-07-12 08:34:50.123263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=758783ca 00:08:15.145 [2024-07-12 08:34:50.123519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.145 [2024-07-12 08:34:50.123837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8c010a2d4837a266, Actual=88010a2d4837a266 00:08:15.145 [2024-07-12 08:34:50.124185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.124501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.145 [2024-07-12 08:34:50.124867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.125172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.145 [2024-07-12 08:34:50.125530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.145 [2024-07-12 08:34:50.125729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3d6229eb15a54e6a 00:08:15.145 passed 00:08:15.145 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:15.145 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:15.145 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:15.145 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:15.145 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:15.146 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:15.146 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:15.146 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:15.146 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:15.146 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 08:34:50.170216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:08:15.146 [2024-07-12 08:34:50.171352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:08:15.146 [2024-07-12 08:34:50.172503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.173634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.174764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.175870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.177013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=c5c7 00:08:15.146 [2024-07-12 08:34:50.178161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=efe9 00:08:15.146 [2024-07-12 08:34:50.179298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed 00:08:15.146 [2024-07-12 08:34:50.180457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a7f411ee, Actual=a3f411ee 00:08:15.146 [2024-07-12 08:34:50.181594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.182745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.183872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:15.146 [2024-07-12 08:34:50.185026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, 
Actual=40000000000005a 00:08:15.146 [2024-07-12 08:34:50.186157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8252e404 00:08:15.146 [2024-07-12 08:34:50.187285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=4540a484 00:08:15.146 [2024-07-12 08:34:50.188410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.146 [2024-07-12 08:34:50.189569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c39fa16bb60648f1, Actual=c79fa16bb60648f1 00:08:15.146 [2024-07-12 08:34:50.190691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.191824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.192968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.194095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.195212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.146 [2024-07-12 08:34:50.196400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=6d989d3d3474d734 00:08:15.146 passed 00:08:15.146 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 08:34:50.196788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.146 [2024-07-12 08:34:50.197080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:08:15.146 [2024-07-12 08:34:50.197372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.197661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.197985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.146 [2024-07-12 08:34:50.198311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.146 [2024-07-12 08:34:50.198586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.146 [2024-07-12 08:34:50.198889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1bf2 00:08:15.146 [2024-07-12 08:34:50.199178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 
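A detail worth noticing in the inject_1_2_4_8 output above: each Expected/Actual pair differs by exactly one bit (for example Guard f94c vs fd4c, App Tag 88 vs 488, Ref Tag 5a vs 400005a), which is consistent with these tests flipping a single bit and asserting that every DIF field check catches it. A quick check of that property, using values copied from the log (the popcount builtin assumes the gcc toolchain this run uses):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Expected/Actual values taken from the log lines above. */
        uint64_t pairs[][2] = {
            {0xf94c, 0xfd4c},   /* Guard */
            {0x88, 0x488},      /* App Tag */
            {0x5a, 0x400005a},  /* Ref Tag */
        };

        for (int i = 0; i < 3; i++) {
            uint64_t diff = pairs[i][0] ^ pairs[i][1];
            /* Each XOR has exactly one bit set: a single-bit injection. */
            printf("diff=%#llx popcount=%d\n",
                   (unsigned long long)diff, __builtin_popcountll(diff));
        }
        return 0;
    }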
00:08:15.146 [2024-07-12 08:34:50.199482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:08:15.146 [2024-07-12 08:34:50.199797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.200101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.200402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.146 [2024-07-12 08:34:50.200716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.146 [2024-07-12 08:34:50.200993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.146 [2024-07-12 08:34:50.201282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a7768506 00:08:15.146 [2024-07-12 08:34:50.201588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.146 [2024-07-12 08:34:50.201856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2302358b893a436b, Actual=2702358b893a436b 00:08:15.146 [2024-07-12 08:34:50.202165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.202448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.202753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.146 [2024-07-12 08:34:50.203021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.146 [2024-07-12 08:34:50.203365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.146 [2024-07-12 08:34:50.203644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=8d0509dd0b48dcae 00:08:15.146 passed 00:08:15.146 Test: dix_sec_512_md_0_error ...passed 00:08:15.146 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-12 08:34:50.203731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
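The dif_sec_512_md_0_error / dix_sec_512_md_0_error family passes a zero metadata size on purpose, so the "Metadata size is smaller than DIF size." message above is the test succeeding, not failing. The guard being exercised is parameter validation in spdk_dif_ctx_init: a format that cannot hold an 8-byte DIF tuple is rejected before any I/O happens. A sketch of that shape of check, mirroring the two messages seen in this log (illustrative only, not SPDK's actual code):

    #include <stdint.h>
    #include <stdio.h>

    #define DIF_TUPLE_SIZE 8u /* bytes occupied by one T10 DIF tuple */

    static int dif_ctx_init_sketch(uint32_t block_size, uint32_t md_size)
    {
        if (md_size < DIF_TUPLE_SIZE) {
            fprintf(stderr, "Metadata size is smaller than DIF size.\n");
            return -1;
        }
        if (block_size == 0 || block_size % 4096 != 0) {
            fprintf(stderr, "Zero block size is not allowed and should be"
                    " a multiple of 4kB\n");
            return -1;
        }
        return 0;
    }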
00:08:15.146 passed 00:08:15.146 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:15.146 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:15.146 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:15.146 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:15.146 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:15.146 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:15.146 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:15.146 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:15.146 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 08:34:50.241792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f94c, Actual=fd4c 00:08:15.146 [2024-07-12 08:34:50.242695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=391e, Actual=3d1e 00:08:15.146 [2024-07-12 08:34:50.243596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.244500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.146 [2024-07-12 08:34:50.245439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.246336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.146 [2024-07-12 08:34:50.247208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=c5c7 00:08:15.147 [2024-07-12 08:34:50.248140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=efe9 00:08:15.147 [2024-07-12 08:34:50.249070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1eb753ed, Actual=1ab753ed 00:08:15.147 [2024-07-12 08:34:50.249982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a7f411ee, Actual=a3f411ee 00:08:15.147 [2024-07-12 08:34:50.250878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.251759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.252653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:15.147 [2024-07-12 08:34:50.253603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=40000000000005a 00:08:15.147 [2024-07-12 08:34:50.254487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=8252e404 00:08:15.147 [2024-07-12 08:34:50.255370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=4540a484 
00:08:15.147 [2024-07-12 08:34:50.256290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.147 [2024-07-12 08:34:50.257210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=c39fa16bb60648f1, Actual=c79fa16bb60648f1 00:08:15.147 [2024-07-12 08:34:50.258116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.258978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.259902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.147 [2024-07-12 08:34:50.260842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=400005a 00:08:15.147 [2024-07-12 08:34:50.261745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.147 passed 00:08:15.147 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 08:34:50.262649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=6d989d3d3474d734 00:08:15.147 [2024-07-12 08:34:50.262960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:08:15.147 [2024-07-12 08:34:50.263198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:08:15.147 [2024-07-12 08:34:50.263427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.263654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.263904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.147 [2024-07-12 08:34:50.264139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.147 [2024-07-12 08:34:50.264375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=c5c7 00:08:15.147 [2024-07-12 08:34:50.264601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1bf2 00:08:15.147 [2024-07-12 08:34:50.264831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:08:15.147 [2024-07-12 08:34:50.265053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:08:15.147 [2024-07-12 08:34:50.265295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 
08:34:50.265524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.265728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.147 [2024-07-12 08:34:50.265957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000000058 00:08:15.147 [2024-07-12 08:34:50.266179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=8252e404 00:08:15.147 [2024-07-12 08:34:50.266408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=a7768506 00:08:15.147 [2024-07-12 08:34:50.266642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a176a7728ecc20d3, Actual=a576a7728ecc20d3 00:08:15.147 [2024-07-12 08:34:50.266870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2302358b893a436b, Actual=2702358b893a436b 00:08:15.147 [2024-07-12 08:34:50.267089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.267304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:08:15.147 [2024-07-12 08:34:50.267521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.147 [2024-07-12 08:34:50.267757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:08:15.147 [2024-07-12 08:34:50.267997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7b63a8a8ea540960 00:08:15.147 passed 00:08:15.147 Test: set_md_interleave_iovs_test ...[2024-07-12 08:34:50.268225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=8d0509dd0b48dcae 00:08:15.147 passed 00:08:15.147 Test: set_md_interleave_iovs_split_test ...passed 00:08:15.147 Test: dif_generate_stream_pi_16_test ...passed 00:08:15.147 Test: dif_generate_stream_test ...passed 00:08:15.147 Test: set_md_interleave_iovs_alignment_test ...passed 00:08:15.147 Test: dif_generate_split_test ...[2024-07-12 08:34:50.274382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
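Just above, set_md_interleave_iovs_alignment_test provokes "Buffer overflow will occur." from spdk_dif_set_md_interleave_iovs: interleaving metadata back into a data stream expands every block by the metadata size, and the call must refuse to proceed when the destination cannot hold the expanded length. A sketch of that capacity calculation (hypothetical helper, not the library function itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Would interleaving md_size bytes of metadata after each data block
     * overflow a destination buffer of buf_len bytes? */
    static int check_interleave_capacity(uint32_t num_blocks,
                                         uint32_t data_block_size,
                                         uint32_t md_size, uint64_t buf_len)
    {
        uint64_t required =
            (uint64_t)num_blocks * ((uint64_t)data_block_size + md_size);

        if (buf_len < required) {
            fprintf(stderr, "Buffer overflow will occur.\n");
            return -1;
        }
        return 0;
    }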
00:08:15.147 passed 00:08:15.147 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:15.147 Test: dif_verify_split_test ...passed 00:08:15.147 Test: dif_verify_stream_multi_segments_test ...passed 00:08:15.147 Test: update_crc32c_pi_16_test ...passed 00:08:15.147 Test: update_crc32c_test ...passed 00:08:15.147 Test: dif_update_crc32c_split_test ...passed 00:08:15.147 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:15.147 Test: get_range_with_md_test ...passed 00:08:15.147 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:15.147 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:15.147 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:15.147 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:15.147 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:15.147 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:15.147 Test: dif_generate_and_verify_unmap_test ...passed 00:08:15.147 00:08:15.147 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.147 suites 1 1 n/a 0 0 00:08:15.147 tests 79 79 79 0 0 00:08:15.147 asserts 3584 3584 3584 0 n/a 00:08:15.147 00:08:15.147 Elapsed time = 0.328 seconds 00:08:15.147 08:34:50 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:15.406 00:08:15.406 00:08:15.406 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.406 http://cunit.sourceforge.net/ 00:08:15.406 00:08:15.406 00:08:15.406 Suite: iov 00:08:15.406 Test: test_single_iov ...passed 00:08:15.406 Test: test_simple_iov ...passed 00:08:15.406 Test: test_complex_iov ...passed 00:08:15.406 Test: test_iovs_to_buf ...passed 00:08:15.406 Test: test_buf_to_iovs ...passed 00:08:15.406 Test: test_memset ...passed 00:08:15.406 Test: test_iov_one ...passed 00:08:15.406 Test: test_iov_xfer ...passed 00:08:15.406 00:08:15.406 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.406 suites 1 1 n/a 0 0 00:08:15.406 tests 8 8 8 0 0 00:08:15.406 asserts 156 156 156 0 n/a 00:08:15.406 00:08:15.406 Elapsed time = 0.000 seconds 00:08:15.406 08:34:50 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:15.406 00:08:15.406 00:08:15.406 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.406 http://cunit.sourceforge.net/ 00:08:15.406 00:08:15.406 00:08:15.406 Suite: math 00:08:15.406 Test: test_serial_number_arithmetic ...passed 00:08:15.406 Suite: erase 00:08:15.406 Test: test_memset_s ...passed 00:08:15.406 00:08:15.406 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.406 suites 2 2 n/a 0 0 00:08:15.406 tests 2 2 2 0 0 00:08:15.406 asserts 18 18 18 0 n/a 00:08:15.406 00:08:15.406 Elapsed time = 0.000 seconds 00:08:15.406 08:34:50 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:15.406 00:08:15.406 00:08:15.406 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.406 http://cunit.sourceforge.net/ 00:08:15.406 00:08:15.406 00:08:15.406 Suite: pipe 00:08:15.406 Test: test_create_destroy ...passed 00:08:15.406 Test: test_write_get_buffer ...passed 00:08:15.406 Test: test_write_advance ...passed 00:08:15.406 Test: test_read_get_buffer ...passed 00:08:15.406 Test: test_read_advance ...passed 00:08:15.406 Test: test_data ...passed 00:08:15.406 00:08:15.406 Run 
Summary: Type Total Ran Passed Failed Inactive 00:08:15.406 suites 1 1 n/a 0 0 00:08:15.406 tests 6 6 6 0 0 00:08:15.406 asserts 251 251 251 0 n/a 00:08:15.406 00:08:15.406 Elapsed time = 0.000 seconds 00:08:15.406 08:34:50 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:15.406 00:08:15.406 00:08:15.406 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.406 http://cunit.sourceforge.net/ 00:08:15.406 00:08:15.406 00:08:15.406 Suite: xor 00:08:15.406 Test: test_xor_gen ...passed 00:08:15.406 00:08:15.406 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.406 suites 1 1 n/a 0 0 00:08:15.406 tests 1 1 1 0 0 00:08:15.406 asserts 17 17 17 0 n/a 00:08:15.406 00:08:15.406 Elapsed time = 0.006 seconds 00:08:15.406 00:08:15.406 real 0m0.727s 00:08:15.406 user 0m0.537s 00:08:15.406 sys 0m0.195s 00:08:15.406 08:34:50 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.406 08:34:50 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 ************************************ 00:08:15.407 END TEST unittest_util 00:08:15.407 ************************************ 00:08:15.407 08:34:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:15.407 08:34:50 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:15.407 08:34:50 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:15.407 08:34:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.407 08:34:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.407 08:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 ************************************ 00:08:15.407 START TEST unittest_vhost 00:08:15.407 ************************************ 00:08:15.407 08:34:50 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:15.407 00:08:15.407 00:08:15.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.407 http://cunit.sourceforge.net/ 00:08:15.407 00:08:15.407 00:08:15.407 Suite: vhost_suite 00:08:15.407 Test: desc_to_iov_test ...[2024-07-12 08:34:50.531071] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:15.407 passed 00:08:15.407 Test: create_controller_test ...[2024-07-12 08:34:50.535554] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:15.407 [2024-07-12 08:34:50.535678] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:15.407 [2024-07-12 08:34:50.535816] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:15.407 [2024-07-12 08:34:50.535894] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:15.407 [2024-07-12 08:34:50.535940] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:15.407 [2024-07-12 08:34:50.536289] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: 
Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[... long run of 'x' characters trimmed ...]xxxxxxxxxxxxxxxx 00:08:15.407 [2024-07-12 08:34:50.537398] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:15.407
passed 00:08:15.407 Test: session_find_by_vid_test ...passed 00:08:15.407 Test: remove_controller_test ...[2024-07-12 08:34:50.539518] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:15.407 passed
00:08:15.407 Test: vq_avail_ring_get_test ...passed 00:08:15.407 Test: vq_packed_ring_test ...passed 00:08:15.407 Test: vhost_blk_construct_test ...passed 00:08:15.407 00:08:15.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.407 suites 1 1 n/a 0 0 00:08:15.407 tests 7 7 7 0 0 00:08:15.407 asserts 147 147 147 0 n/a 00:08:15.407 00:08:15.407 Elapsed time = 0.012 seconds 00:08:15.407 00:08:15.407 real 0m0.054s 00:08:15.407 user 0m0.027s 00:08:15.407 sys 0m0.027s 00:08:15.407
08:34:50 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.407 08:34:50 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 ************************************ 00:08:15.407 END TEST unittest_vhost ************************************ 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 08:34:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 08:34:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 08:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:15.666
************************************ 00:08:15.666 START TEST unittest_dma ************************************ 08:34:50 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:15.666 00:08:15.666 00:08:15.666 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.666 http://cunit.sourceforge.net/ 00:08:15.666 00:08:15.666 00:08:15.666 Suite: dma_suite 00:08:15.666 Test: test_dma ...[2024-07-12 08:34:50.629701] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:15.666
passed 00:08:15.666 00:08:15.666 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.666 suites 1 1 n/a 0 0 00:08:15.666 tests 1 1 1 0 0 00:08:15.666 asserts 54 54 54 0 n/a 00:08:15.666 00:08:15.666 Elapsed time = 0.000 seconds 00:08:15.666 00:08:15.666 real 0m0.032s 00:08:15.666 user 0m0.017s 00:08:15.666 sys 0m0.016s 00:08:15.666
08:34:50 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 08:34:50 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:15.666
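The single *ERROR* inside dma_suite above is likewise a negative-path check: test_dma hands spdk_memory_domain_create a zero-sized context and expects it to be rejected. The guard being exercised is plain argument validation of roughly this shape (hypothetical types and signature, not the real API):

    #include <stddef.h>
    #include <stdio.h>

    struct memory_domain_ctx {
        size_t size;    /* must describe a non-empty context */
        void *user_ctx;
    };

    static int memory_domain_create_sketch(const struct memory_domain_ctx *ctx)
    {
        if (ctx != NULL && ctx->size == 0) {
            fprintf(stderr, "Context size can't be 0\n");
            return -22; /* -EINVAL */
        }
        return 0;
    }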
************************************ 00:08:15.666 END TEST unittest_dma 00:08:15.666 ************************************ 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 START TEST unittest_init 00:08:15.666 ************************************ 00:08:15.666 08:34:50 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:08:15.666 08:34:50 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:15.666 00:08:15.666 00:08:15.666 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.666 http://cunit.sourceforge.net/ 00:08:15.666 00:08:15.666 00:08:15.666 Suite: subsystem_suite 00:08:15.666 Test: subsystem_sort_test_depends_on_single ...passed 00:08:15.666 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:15.666 Test: subsystem_sort_test_missing_dependency ...[2024-07-12 08:34:50.714259] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:15.666 passed 00:08:15.666 00:08:15.666 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.666 suites 1 1 n/a 0 0 00:08:15.666 tests 3 3 3 0 0 00:08:15.666 asserts 20 20 20 0 n/a 00:08:15.666 00:08:15.666 Elapsed time = 0.000 seconds 00:08:15.666 [2024-07-12 08:34:50.714560] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:15.666 00:08:15.666 real 0m0.036s 00:08:15.666 user 0m0.024s 00:08:15.666 sys 0m0.013s 00:08:15.666 08:34:50 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 08:34:50 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 END TEST unittest_init 00:08:15.666 ************************************ 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 START TEST unittest_keyring 00:08:15.666 ************************************ 00:08:15.666 08:34:50 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:15.666 00:08:15.666 00:08:15.666 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.666 http://cunit.sourceforge.net/ 00:08:15.666 00:08:15.666 00:08:15.666 Suite: keyring 00:08:15.666 Test: test_keyring_add_remove ...[2024-07-12 08:34:50.795431] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:15.666 passed 00:08:15.666 Test: test_keyring_get_put ...passed 00:08:15.666 00:08:15.666 Run 
Summary: Type Total Ran Passed Failed Inactive 00:08:15.666 suites 1 1 n/a 0 0 00:08:15.666 tests 2 2 2 0 0 00:08:15.666 asserts 44 44 44 0 n/a 00:08:15.666 00:08:15.666 Elapsed time = 0.001 seconds 00:08:15.666 [2024-07-12 08:34:50.795734] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:15.666 [2024-07-12 08:34:50.795805] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:15.666 00:08:15.666 real 0m0.035s 00:08:15.666 user 0m0.019s 00:08:15.666 sys 0m0.016s 00:08:15.666 08:34:50 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.666 08:34:50 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:15.666 ************************************ 00:08:15.666 END TEST unittest_keyring 00:08:15.666 ************************************ 00:08:15.666 08:34:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:15.666 08:34:50 unittest -- unit/unittest.sh@293 -- # hostname 00:08:15.667 08:34:50 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:15.924 geninfo: WARNING: invalid characters removed from testname! 00:08:47.992 08:35:20 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:50.524 08:35:25 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:53.822 08:35:28 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.122 08:35:32 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:00.403 08:35:35 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:02.934 08:35:38 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:06.213 08:35:40 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:08.737 08:35:43 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:08.737 08:35:43 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:08.996 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:08.996 Found 324 entries. 00:09:08.996 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:08.996 Writing .css and .png files. 00:09:08.996 Generating output. 
00:09:08.996 Processing file include/linux/virtio_ring.h 00:09:09.254 Processing file include/spdk/histogram_data.h 00:09:09.254 Processing file include/spdk/util.h 00:09:09.254 Processing file include/spdk/bdev_module.h 00:09:09.254 Processing file include/spdk/base64.h 00:09:09.254 Processing file include/spdk/trace.h 00:09:09.254 Processing file include/spdk/nvme_spec.h 00:09:09.254 Processing file include/spdk/nvmf_transport.h 00:09:09.254 Processing file include/spdk/thread.h 00:09:09.254 Processing file include/spdk/endian.h 00:09:09.254 Processing file include/spdk/nvme.h 00:09:09.254 Processing file include/spdk/mmio.h 00:09:09.512 Processing file include/spdk_internal/utf.h 00:09:09.512 Processing file include/spdk_internal/rdma_utils.h 00:09:09.512 Processing file include/spdk_internal/sock.h 00:09:09.512 Processing file include/spdk_internal/sgl.h 00:09:09.512 Processing file include/spdk_internal/nvme_tcp.h 00:09:09.512 Processing file include/spdk_internal/virtio.h 00:09:09.770 Processing file lib/accel/accel_sw.c 00:09:09.770 Processing file lib/accel/accel.c 00:09:09.770 Processing file lib/accel/accel_rpc.c 00:09:10.028 Processing file lib/bdev/bdev_zone.c 00:09:10.028 Processing file lib/bdev/scsi_nvme.c 00:09:10.028 Processing file lib/bdev/bdev.c 00:09:10.028 Processing file lib/bdev/part.c 00:09:10.028 Processing file lib/bdev/bdev_rpc.c 00:09:10.287 Processing file lib/blob/blobstore.h 00:09:10.287 Processing file lib/blob/request.c 00:09:10.287 Processing file lib/blob/blob_bs_dev.c 00:09:10.287 Processing file lib/blob/blobstore.c 00:09:10.287 Processing file lib/blob/zeroes.c 00:09:10.287 Processing file lib/blobfs/tree.c 00:09:10.287 Processing file lib/blobfs/blobfs.c 00:09:10.545 Processing file lib/conf/conf.c 00:09:10.545 Processing file lib/dma/dma.c 00:09:10.803 Processing file lib/env_dpdk/pci.c 00:09:10.804 Processing file lib/env_dpdk/pci_vmd.c 00:09:10.804 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:10.804 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:10.804 Processing file lib/env_dpdk/pci_event.c 00:09:10.804 Processing file lib/env_dpdk/pci_ioat.c 00:09:10.804 Processing file lib/env_dpdk/memory.c 00:09:10.804 Processing file lib/env_dpdk/pci_virtio.c 00:09:10.804 Processing file lib/env_dpdk/init.c 00:09:10.804 Processing file lib/env_dpdk/pci_dpdk.c 00:09:10.804 Processing file lib/env_dpdk/sigbus_handler.c 00:09:10.804 Processing file lib/env_dpdk/env.c 00:09:10.804 Processing file lib/env_dpdk/pci_idxd.c 00:09:10.804 Processing file lib/env_dpdk/threads.c 00:09:11.061 Processing file lib/event/app_rpc.c 00:09:11.061 Processing file lib/event/reactor.c 00:09:11.061 Processing file lib/event/scheduler_static.c 00:09:11.061 Processing file lib/event/app.c 00:09:11.061 Processing file lib/event/log_rpc.c 00:09:11.628 Processing file lib/ftl/ftl_io.c 00:09:11.628 Processing file lib/ftl/ftl_band_ops.c 00:09:11.628 Processing file lib/ftl/ftl_reloc.c 00:09:11.628 Processing file lib/ftl/ftl_band.h 00:09:11.628 Processing file lib/ftl/ftl_init.c 00:09:11.628 Processing file lib/ftl/ftl_core.c 00:09:11.628 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:11.628 Processing file lib/ftl/ftl_l2p_flat.c 00:09:11.628 Processing file lib/ftl/ftl_nv_cache.c 00:09:11.628 Processing file lib/ftl/ftl_l2p.c 00:09:11.628 Processing file lib/ftl/ftl_core.h 00:09:11.628 Processing file lib/ftl/ftl_debug.h 00:09:11.628 Processing file lib/ftl/ftl_p2l.c 00:09:11.628 Processing file lib/ftl/ftl_trace.c 00:09:11.628 Processing file lib/ftl/ftl_sb.c 00:09:11.628 
Processing file lib/ftl/ftl_layout.c 00:09:11.628 Processing file lib/ftl/ftl_io.h 00:09:11.628 Processing file lib/ftl/ftl_debug.c 00:09:11.628 Processing file lib/ftl/ftl_band.c 00:09:11.628 Processing file lib/ftl/ftl_writer.c 00:09:11.628 Processing file lib/ftl/ftl_writer.h 00:09:11.628 Processing file lib/ftl/ftl_rq.c 00:09:11.628 Processing file lib/ftl/ftl_l2p_cache.c 00:09:11.628 Processing file lib/ftl/ftl_nv_cache.h 00:09:11.628 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:11.628 Processing file lib/ftl/base/ftl_base_dev.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:11.886 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:11.886 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:11.886 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:12.143 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:12.401 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:12.401 Processing file lib/ftl/utils/ftl_property.h 00:09:12.402 Processing file lib/ftl/utils/ftl_mempool.c 00:09:12.402 Processing file lib/ftl/utils/ftl_property.c 00:09:12.402 Processing file lib/ftl/utils/ftl_md.c 00:09:12.402 Processing file lib/ftl/utils/ftl_conf.c 00:09:12.402 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:12.402 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:12.402 Processing file lib/ftl/utils/ftl_df.h 00:09:12.402 Processing file lib/idxd/idxd.c 00:09:12.402 Processing file lib/idxd/idxd_user.c 00:09:12.402 Processing file lib/idxd/idxd_internal.h 00:09:12.660 Processing file lib/init/subsystem_rpc.c 00:09:12.660 Processing file lib/init/subsystem.c 00:09:12.660 Processing file lib/init/rpc.c 00:09:12.661 Processing file lib/init/json_config.c 00:09:12.661 Processing file lib/ioat/ioat.c 00:09:12.661 Processing file lib/ioat/ioat_internal.h 00:09:13.228 Processing file lib/iscsi/conn.c 00:09:13.228 Processing file lib/iscsi/md5.c 00:09:13.228 Processing file lib/iscsi/param.c 00:09:13.228 Processing file lib/iscsi/portal_grp.c 00:09:13.228 Processing file lib/iscsi/task.c 00:09:13.228 Processing file lib/iscsi/iscsi_rpc.c 00:09:13.228 Processing file lib/iscsi/tgt_node.c 00:09:13.228 Processing file lib/iscsi/iscsi_subsystem.c 00:09:13.228 Processing file lib/iscsi/task.h 00:09:13.228 Processing file lib/iscsi/init_grp.c 00:09:13.228 Processing file lib/iscsi/iscsi.h 00:09:13.228 Processing file lib/iscsi/iscsi.c 00:09:13.228 Processing file lib/json/json_write.c 00:09:13.228 Processing file 
lib/json/json_util.c 00:09:13.228 Processing file lib/json/json_parse.c 00:09:13.486 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:13.486 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:13.486 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:13.486 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:13.486 Processing file lib/keyring/keyring_rpc.c 00:09:13.486 Processing file lib/keyring/keyring.c 00:09:13.486 Processing file lib/log/log.c 00:09:13.486 Processing file lib/log/log_flags.c 00:09:13.486 Processing file lib/log/log_deprecated.c 00:09:13.745 Processing file lib/lvol/lvol.c 00:09:13.745 Processing file lib/nbd/nbd.c 00:09:13.745 Processing file lib/nbd/nbd_rpc.c 00:09:13.745 Processing file lib/notify/notify.c 00:09:13.745 Processing file lib/notify/notify_rpc.c 00:09:14.679 Processing file lib/nvme/nvme_io_msg.c 00:09:14.679 Processing file lib/nvme/nvme_qpair.c 00:09:14.679 Processing file lib/nvme/nvme_tcp.c 00:09:14.679 Processing file lib/nvme/nvme_pcie_internal.h 00:09:14.679 Processing file lib/nvme/nvme_cuse.c 00:09:14.679 Processing file lib/nvme/nvme_zns.c 00:09:14.679 Processing file lib/nvme/nvme_opal.c 00:09:14.679 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:14.679 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:14.679 Processing file lib/nvme/nvme_auth.c 00:09:14.679 Processing file lib/nvme/nvme_rdma.c 00:09:14.679 Processing file lib/nvme/nvme_poll_group.c 00:09:14.679 Processing file lib/nvme/nvme_fabric.c 00:09:14.679 Processing file lib/nvme/nvme_ctrlr.c 00:09:14.679 Processing file lib/nvme/nvme_pcie.c 00:09:14.679 Processing file lib/nvme/nvme_stubs.c 00:09:14.679 Processing file lib/nvme/nvme_ns.c 00:09:14.679 Processing file lib/nvme/nvme_discovery.c 00:09:14.679 Processing file lib/nvme/nvme.c 00:09:14.679 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:14.679 Processing file lib/nvme/nvme_quirks.c 00:09:14.679 Processing file lib/nvme/nvme_internal.h 00:09:14.679 Processing file lib/nvme/nvme_pcie_common.c 00:09:14.679 Processing file lib/nvme/nvme_ns_cmd.c 00:09:14.679 Processing file lib/nvme/nvme_transport.c 00:09:15.244 Processing file lib/nvmf/nvmf_rpc.c 00:09:15.244 Processing file lib/nvmf/ctrlr_discovery.c 00:09:15.245 Processing file lib/nvmf/ctrlr.c 00:09:15.245 Processing file lib/nvmf/nvmf.c 00:09:15.245 Processing file lib/nvmf/stubs.c 00:09:15.245 Processing file lib/nvmf/transport.c 00:09:15.245 Processing file lib/nvmf/nvmf_internal.h 00:09:15.245 Processing file lib/nvmf/auth.c 00:09:15.245 Processing file lib/nvmf/subsystem.c 00:09:15.245 Processing file lib/nvmf/ctrlr_bdev.c 00:09:15.245 Processing file lib/nvmf/rdma.c 00:09:15.245 Processing file lib/nvmf/tcp.c 00:09:15.245 Processing file lib/rdma_provider/common.c 00:09:15.245 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:09:15.245 Processing file lib/rdma_utils/rdma_utils.c 00:09:15.502 Processing file lib/rpc/rpc.c 00:09:15.502 Processing file lib/scsi/port.c 00:09:15.502 Processing file lib/scsi/lun.c 00:09:15.502 Processing file lib/scsi/scsi_rpc.c 00:09:15.502 Processing file lib/scsi/task.c 00:09:15.502 Processing file lib/scsi/dev.c 00:09:15.502 Processing file lib/scsi/scsi_bdev.c 00:09:15.502 Processing file lib/scsi/scsi.c 00:09:15.502 Processing file lib/scsi/scsi_pr.c 00:09:15.812 Processing file lib/sock/sock.c 00:09:15.812 Processing file lib/sock/sock_rpc.c 00:09:15.812 Processing file lib/thread/iobuf.c 00:09:15.812 Processing file lib/thread/thread.c 00:09:16.109 Processing file lib/trace/trace_flags.c 00:09:16.109 
Processing file lib/trace/trace.c 00:09:16.109 Processing file lib/trace/trace_rpc.c 00:09:16.109 Processing file lib/trace_parser/trace.cpp 00:09:16.109 Processing file lib/ut/ut.c 00:09:16.109 Processing file lib/ut_mock/mock.c 00:09:16.676 Processing file lib/util/iov.c 00:09:16.676 Processing file lib/util/crc32.c 00:09:16.676 Processing file lib/util/math.c 00:09:16.676 Processing file lib/util/dif.c 00:09:16.676 Processing file lib/util/base64.c 00:09:16.676 Processing file lib/util/xor.c 00:09:16.676 Processing file lib/util/crc32c.c 00:09:16.676 Processing file lib/util/cpuset.c 00:09:16.676 Processing file lib/util/string.c 00:09:16.676 Processing file lib/util/zipf.c 00:09:16.676 Processing file lib/util/uuid.c 00:09:16.676 Processing file lib/util/fd.c 00:09:16.676 Processing file lib/util/crc16.c 00:09:16.676 Processing file lib/util/strerror_tls.c 00:09:16.676 Processing file lib/util/file.c 00:09:16.676 Processing file lib/util/bit_array.c 00:09:16.676 Processing file lib/util/crc32_ieee.c 00:09:16.676 Processing file lib/util/hexlify.c 00:09:16.676 Processing file lib/util/pipe.c 00:09:16.676 Processing file lib/util/fd_group.c 00:09:16.676 Processing file lib/util/crc64.c 00:09:16.676 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:16.676 Processing file lib/vfio_user/host/vfio_user.c 00:09:16.934 Processing file lib/vhost/vhost_scsi.c 00:09:16.934 Processing file lib/vhost/rte_vhost_user.c 00:09:16.934 Processing file lib/vhost/vhost_blk.c 00:09:16.934 Processing file lib/vhost/vhost_internal.h 00:09:16.934 Processing file lib/vhost/vhost.c 00:09:16.934 Processing file lib/vhost/vhost_rpc.c 00:09:16.934 Processing file lib/virtio/virtio_pci.c 00:09:16.934 Processing file lib/virtio/virtio_vhost_user.c 00:09:16.934 Processing file lib/virtio/virtio_vfio_user.c 00:09:16.934 Processing file lib/virtio/virtio.c 00:09:17.192 Processing file lib/vmd/led.c 00:09:17.192 Processing file lib/vmd/vmd.c 00:09:17.192 Processing file module/accel/dsa/accel_dsa.c 00:09:17.192 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:17.192 Processing file module/accel/error/accel_error_rpc.c 00:09:17.192 Processing file module/accel/error/accel_error.c 00:09:17.450 Processing file module/accel/iaa/accel_iaa.c 00:09:17.450 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:17.450 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:17.450 Processing file module/accel/ioat/accel_ioat.c 00:09:17.450 Processing file module/bdev/aio/bdev_aio.c 00:09:17.450 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:17.708 Processing file module/bdev/delay/vbdev_delay.c 00:09:17.708 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:17.708 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:17.708 Processing file module/bdev/error/vbdev_error.c 00:09:17.708 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:17.708 Processing file module/bdev/ftl/bdev_ftl.c 00:09:17.964 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:17.964 Processing file module/bdev/gpt/gpt.c 00:09:17.964 Processing file module/bdev/gpt/gpt.h 00:09:17.964 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:17.964 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:18.222 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:18.222 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:18.222 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:18.222 Processing file module/bdev/malloc/bdev_malloc.c 00:09:18.480 Processing file module/bdev/null/bdev_null_rpc.c 00:09:18.480 Processing file 
module/bdev/null/bdev_null.c 00:09:18.738 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:18.738 Processing file module/bdev/nvme/nvme_rpc.c 00:09:18.738 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:18.738 Processing file module/bdev/nvme/bdev_nvme.c 00:09:18.738 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:18.738 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:18.738 Processing file module/bdev/nvme/vbdev_opal.c 00:09:18.738 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:18.738 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:18.997 Processing file module/bdev/raid/raid0.c 00:09:18.997 Processing file module/bdev/raid/concat.c 00:09:18.997 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:18.997 Processing file module/bdev/raid/raid1.c 00:09:18.997 Processing file module/bdev/raid/bdev_raid.h 00:09:18.997 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:18.997 Processing file module/bdev/raid/bdev_raid.c 00:09:18.997 Processing file module/bdev/raid/raid5f.c 00:09:18.997 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:18.997 Processing file module/bdev/split/vbdev_split.c 00:09:19.255 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:19.255 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:19.255 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:19.255 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:19.255 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:19.514 Processing file module/blob/bdev/blob_bdev.c 00:09:19.514 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:19.514 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:19.514 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:19.772 Processing file module/event/subsystems/accel/accel.c 00:09:19.772 Processing file module/event/subsystems/bdev/bdev.c 00:09:19.772 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:19.772 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:19.772 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:19.772 Processing file module/event/subsystems/keyring/keyring.c 00:09:20.031 Processing file module/event/subsystems/nbd/nbd.c 00:09:20.031 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:20.031 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:20.031 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:20.031 Processing file module/event/subsystems/scsi/scsi.c 00:09:20.290 Processing file module/event/subsystems/sock/sock.c 00:09:20.290 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:20.290 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:20.290 Processing file module/event/subsystems/vmd/vmd.c 00:09:20.290 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:20.548 Processing file module/keyring/file/keyring_rpc.c 00:09:20.548 Processing file module/keyring/file/keyring.c 00:09:20.548 Processing file module/keyring/linux/keyring_rpc.c 00:09:20.548 Processing file module/keyring/linux/keyring.c 00:09:20.548 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:20.807 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:20.807 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:20.807 Processing file module/sock/sock_kernel.h 00:09:21.065 Processing file module/sock/posix/posix.c 00:09:21.065 Writing directory view page. 
00:09:21.065 Overall coverage rate: 00:09:21.065 lines......: 38.9% (40912 of 105107 lines) 00:09:21.065 functions..: 42.4% (3727 of 8788 functions) 00:09:21.065 00:09:21.065 00:09:21.065 ===================== 00:09:21.065 All unit tests passed 00:09:21.065 ===================== 00:09:21.065 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:21.065 08:35:56 unittest -- unit/unittest.sh@305 -- # set +x 00:09:21.065 00:09:21.065 00:09:21.065 00:09:21.065 real 3m54.232s 00:09:21.065 user 3m23.648s 00:09:21.065 sys 0m18.145s 00:09:21.065 08:35:56 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.065 08:35:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:21.065 ************************************ 00:09:21.065 END TEST unittest 00:09:21.065 ************************************ 00:09:21.065 08:35:56 -- common/autotest_common.sh@1142 -- # return 0 00:09:21.065 08:35:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:21.065 08:35:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:21.065 08:35:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:21.065 08:35:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:21.066 08:35:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.066 08:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 08:35:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:21.066 08:35:56 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:21.066 08:35:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.066 08:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.066 08:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 ************************************ 00:09:21.066 START TEST env 00:09:21.066 ************************************ 00:09:21.066 08:35:56 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:21.066 * Looking for test storage... 
00:09:21.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:21.066 08:35:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:21.066 08:35:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.066 08:35:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.066 08:35:56 env -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 ************************************ 00:09:21.066 START TEST env_memory 00:09:21.066 ************************************ 00:09:21.066 08:35:56 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:21.066 00:09:21.066 00:09:21.066 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.066 http://cunit.sourceforge.net/ 00:09:21.066 00:09:21.066 00:09:21.066 Suite: memory 00:09:21.066 Test: alloc and free memory map ...[2024-07-12 08:35:56.241174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:21.325 passed 00:09:21.325 Test: mem map translation ...[2024-07-12 08:35:56.288330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:21.325 [2024-07-12 08:35:56.288588] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:21.325 [2024-07-12 08:35:56.288866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:21.325 [2024-07-12 08:35:56.289077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:21.325 passed 00:09:21.325 Test: mem map registration ...[2024-07-12 08:35:56.374444] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:21.325 [2024-07-12 08:35:56.374706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:21.325 passed 00:09:21.325 Test: mem map adjacent registrations ...passed 00:09:21.325 00:09:21.325 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.325 suites 1 1 n/a 0 0 00:09:21.325 tests 4 4 4 0 0 00:09:21.325 asserts 152 152 152 0 n/a 00:09:21.325 00:09:21.325 Elapsed time = 0.289 seconds 00:09:21.325 00:09:21.325 real 0m0.327s 00:09:21.325 user 0m0.297s 00:09:21.325 sys 0m0.028s 00:09:21.325 08:35:56 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.325 ************************************ 00:09:21.325 END TEST env_memory 00:09:21.325 08:35:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:21.325 ************************************ 00:09:21.583 08:35:56 env -- common/autotest_common.sh@1142 -- # return 0 00:09:21.583 08:35:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:21.583 08:35:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.583 08:35:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.583 08:35:56 env -- common/autotest_common.sh@10 -- # set +x 00:09:21.583 ************************************ 00:09:21.583 START TEST env_vtophys 
00:09:21.583 ************************************ 00:09:21.583 08:35:56 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:21.583 EAL: lib.eal log level changed from notice to debug 00:09:21.583 EAL: Detected lcore 0 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 1 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 2 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 3 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 4 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 5 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 6 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 7 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 8 as core 0 on socket 0 00:09:21.583 EAL: Detected lcore 9 as core 0 on socket 0 00:09:21.583 EAL: Maximum logical cores by configuration: 128 00:09:21.584 EAL: Detected CPU lcores: 10 00:09:21.584 EAL: Detected NUMA nodes: 1 00:09:21.584 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:21.584 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:21.584 EAL: Checking presence of .so 'librte_eal.so' 00:09:21.584 EAL: Detected static linkage of DPDK 00:09:21.584 EAL: No shared files mode enabled, IPC will be disabled 00:09:21.584 EAL: Selected IOVA mode 'PA' 00:09:21.584 EAL: Probing VFIO support... 00:09:21.584 EAL: IOMMU type 1 (Type 1) is supported 00:09:21.584 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:21.584 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:21.584 EAL: VFIO support initialized 00:09:21.584 EAL: Ask a virtual area of 0x2e000 bytes 00:09:21.584 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:21.584 EAL: Setting up physically contiguous memory... 00:09:21.584 EAL: Setting maximum number of open files to 1048576 00:09:21.584 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:21.584 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:21.584 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.584 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:21.584 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.584 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.584 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:21.584 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:21.584 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.584 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:21.584 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.584 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.584 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:21.584 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:21.584 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.584 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:21.584 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.584 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.584 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:21.584 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:21.584 EAL: Ask a virtual area of 0x61000 bytes 00:09:21.584 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:21.584 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:21.584 EAL: Ask a virtual area of 0x400000000 bytes 00:09:21.584 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:21.584 EAL: 
VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:21.584 EAL: Hugepages will be freed exactly as allocated. 00:09:21.584 EAL: No shared files mode enabled, IPC is disabled 00:09:21.584 EAL: No shared files mode enabled, IPC is disabled 00:09:21.584 EAL: TSC frequency is ~2200000 KHz 00:09:21.584 EAL: Main lcore 0 is ready (tid=7ff3a5f69a40;cpuset=[0]) 00:09:21.584 EAL: Trying to obtain current memory policy. 00:09:21.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.584 EAL: Restoring previous memory policy: 0 00:09:21.584 EAL: request: mp_malloc_sync 00:09:21.584 EAL: No shared files mode enabled, IPC is disabled 00:09:21.584 EAL: Heap on socket 0 was expanded by 2MB 00:09:21.584 EAL: No shared files mode enabled, IPC is disabled 00:09:21.584 EAL: Mem event callback 'spdk:(nil)' registered 00:09:21.842 00:09:21.842 00:09:21.842 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.842 http://cunit.sourceforge.net/ 00:09:21.842 00:09:21.842 00:09:21.842 Suite: components_suite 00:09:22.101 Test: vtophys_malloc_test ...passed 00:09:22.101 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:22.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.101 EAL: Restoring previous memory policy: 0 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was expanded by 4MB 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was shrunk by 4MB 00:09:22.101 EAL: Trying to obtain current memory policy. 00:09:22.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.101 EAL: Restoring previous memory policy: 0 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was expanded by 6MB 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was shrunk by 6MB 00:09:22.101 EAL: Trying to obtain current memory policy. 00:09:22.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.101 EAL: Restoring previous memory policy: 0 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was expanded by 10MB 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was shrunk by 10MB 00:09:22.101 EAL: Trying to obtain current memory policy. 
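The "Mem event callback 'spdk:(nil)' registered" line above is SPDK hooking DPDK's dynamic-memory events; the "Calling mem event callback" and "Heap on socket 0 was expanded/shrunk by ..." lines that follow are those events firing as the test allocates and frees. A hedged sketch of that hookup using DPDK's public rte_memory.h API (mem_event_cb and hook_mem_events are illustrative names, not SPDK's actual internal functions):

    #include <rte_memory.h>

    /* Invoked by EAL on every hugepage allocation or free; SPDK uses a
     * hook like this to keep its vtophys/I/O memory maps in sync. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
            (void)addr;
            (void)len;
            (void)arg;
            if (event == RTE_MEM_EVENT_ALLOC) {
                    /* register [addr, addr + len) with the memory maps */
            } else {
                    /* RTE_MEM_EVENT_FREE: unregister the range */
            }
    }

    static int
    hook_mem_events(void)
    {
            /* The NULL arg is what EAL prints as "(nil)" in 'spdk:(nil)'. */
            return rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
    }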
00:09:22.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.101 EAL: Restoring previous memory policy: 0 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was expanded by 18MB 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was shrunk by 18MB 00:09:22.101 EAL: Trying to obtain current memory policy. 00:09:22.101 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.101 EAL: Restoring previous memory policy: 0 00:09:22.101 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.101 EAL: request: mp_malloc_sync 00:09:22.101 EAL: No shared files mode enabled, IPC is disabled 00:09:22.101 EAL: Heap on socket 0 was expanded by 34MB 00:09:22.360 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.360 EAL: request: mp_malloc_sync 00:09:22.360 EAL: No shared files mode enabled, IPC is disabled 00:09:22.360 EAL: Heap on socket 0 was shrunk by 34MB 00:09:22.360 EAL: Trying to obtain current memory policy. 00:09:22.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.360 EAL: Restoring previous memory policy: 0 00:09:22.360 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.360 EAL: request: mp_malloc_sync 00:09:22.360 EAL: No shared files mode enabled, IPC is disabled 00:09:22.360 EAL: Heap on socket 0 was expanded by 66MB 00:09:22.360 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.360 EAL: request: mp_malloc_sync 00:09:22.360 EAL: No shared files mode enabled, IPC is disabled 00:09:22.360 EAL: Heap on socket 0 was shrunk by 66MB 00:09:22.618 EAL: Trying to obtain current memory policy. 00:09:22.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.618 EAL: Restoring previous memory policy: 0 00:09:22.618 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.618 EAL: request: mp_malloc_sync 00:09:22.618 EAL: No shared files mode enabled, IPC is disabled 00:09:22.618 EAL: Heap on socket 0 was expanded by 130MB 00:09:22.618 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.618 EAL: request: mp_malloc_sync 00:09:22.618 EAL: No shared files mode enabled, IPC is disabled 00:09:22.618 EAL: Heap on socket 0 was shrunk by 130MB 00:09:22.876 EAL: Trying to obtain current memory policy. 00:09:22.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.876 EAL: Restoring previous memory policy: 0 00:09:22.876 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.876 EAL: request: mp_malloc_sync 00:09:22.876 EAL: No shared files mode enabled, IPC is disabled 00:09:22.876 EAL: Heap on socket 0 was expanded by 258MB 00:09:23.443 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.443 EAL: request: mp_malloc_sync 00:09:23.443 EAL: No shared files mode enabled, IPC is disabled 00:09:23.443 EAL: Heap on socket 0 was shrunk by 258MB 00:09:23.702 EAL: Trying to obtain current memory policy. 
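The escalating expand/shrink rounds above (4MB, 6MB, 10MB, ... up through the GB range below) come from the components_suite allocating progressively larger DMA-safe buffers and translating them. The core of each round reduces to roughly the following sketch against spdk/env.h (alloc_and_translate is an illustrative helper; the real vtophys tests also vary alignment and probe error paths):

    #include "spdk/env.h"
    #include <CUnit/Basic.h>

    static void
    alloc_and_translate(size_t size)
    {
            /* Growing allocations force EAL to expand the hugepage heap,
             * producing the "Heap on socket 0 was expanded by ..." lines. */
            void *buf = spdk_dma_zmalloc(size, 0x1000, NULL);
            CU_ASSERT(buf != NULL);

            /* Resolve the virtual address to a physical/IOVA address. */
            uint64_t paddr = spdk_vtophys(buf, NULL);
            CU_ASSERT(paddr != SPDK_VTOPHYS_ERROR);

            /* Freeing lets EAL reclaim hugepages: "... was shrunk by ..." */
            spdk_dma_free(buf);
    }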
00:09:23.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:23.702 EAL: Restoring previous memory policy: 0 00:09:23.702 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.702 EAL: request: mp_malloc_sync 00:09:23.702 EAL: No shared files mode enabled, IPC is disabled 00:09:23.702 EAL: Heap on socket 0 was expanded by 514MB 00:09:24.638 EAL: Calling mem event callback 'spdk:(nil)' 00:09:24.638 EAL: request: mp_malloc_sync 00:09:24.638 EAL: No shared files mode enabled, IPC is disabled 00:09:24.638 EAL: Heap on socket 0 was shrunk by 514MB 00:09:25.224 EAL: Trying to obtain current memory policy. 00:09:25.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:25.482 EAL: Restoring previous memory policy: 0 00:09:25.482 EAL: Calling mem event callback 'spdk:(nil)' 00:09:25.482 EAL: request: mp_malloc_sync 00:09:25.482 EAL: No shared files mode enabled, IPC is disabled 00:09:25.482 EAL: Heap on socket 0 was expanded by 1026MB 00:09:27.381 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.381 EAL: request: mp_malloc_sync 00:09:27.381 EAL: No shared files mode enabled, IPC is disabled 00:09:27.381 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:28.758 passed 00:09:28.758 00:09:28.758 Run Summary: Type Total Ran Passed Failed Inactive 00:09:28.758 suites 1 1 n/a 0 0 00:09:28.758 tests 2 2 2 0 0 00:09:28.758 asserts 6496 6496 6496 0 n/a 00:09:28.758 00:09:28.758 Elapsed time = 6.882 seconds 00:09:28.758 EAL: Calling mem event callback 'spdk:(nil)' 00:09:28.758 EAL: request: mp_malloc_sync 00:09:28.758 EAL: No shared files mode enabled, IPC is disabled 00:09:28.758 EAL: Heap on socket 0 was shrunk by 2MB 00:09:28.758 EAL: No shared files mode enabled, IPC is disabled 00:09:28.758 EAL: No shared files mode enabled, IPC is disabled 00:09:28.758 EAL: No shared files mode enabled, IPC is disabled 00:09:28.758 ************************************ 00:09:28.758 END TEST env_vtophys 00:09:28.758 ************************************ 00:09:28.758 00:09:28.758 real 0m7.201s 00:09:28.758 user 0m6.108s 00:09:28.758 sys 0m0.946s 00:09:28.758 08:36:03 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.758 08:36:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1142 -- # return 0 00:09:28.758 08:36:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.758 08:36:03 env -- common/autotest_common.sh@10 -- # set +x 00:09:28.758 ************************************ 00:09:28.758 START TEST env_pci 00:09:28.758 ************************************ 00:09:28.758 08:36:03 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:28.758 00:09:28.758 00:09:28.758 CUnit - A unit testing framework for C - Version 2.1-3 00:09:28.758 http://cunit.sourceforge.net/ 00:09:28.758 00:09:28.758 00:09:28.758 Suite: pci 00:09:28.758 Test: pci_hook ...[2024-07-12 08:36:03.851653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 111011 has claimed it 00:09:28.758 EAL: Cannot find device (10000:00:01.0) 00:09:28.758 EAL: Failed to attach device on primary process 00:09:28.758 passed 00:09:28.758 00:09:28.758 Run Summary: Type Total Ran Passed Failed 
Inactive 00:09:28.758 suites 1 1 n/a 0 0 00:09:28.758 tests 1 1 1 0 0 00:09:28.758 asserts 25 25 25 0 n/a 00:09:28.758 00:09:28.758 Elapsed time = 0.006 seconds 00:09:28.758 ************************************ 00:09:28.758 END TEST env_pci 00:09:28.758 ************************************ 00:09:28.758 00:09:28.758 real 0m0.084s 00:09:28.758 user 0m0.061s 00:09:28.758 sys 0m0.023s 00:09:28.758 08:36:03 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.758 08:36:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1142 -- # return 0 00:09:28.758 08:36:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:28.758 08:36:03 env -- env/env.sh@15 -- # uname 00:09:28.758 08:36:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:28.758 08:36:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:28.758 08:36:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:28.758 08:36:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.758 08:36:03 env -- common/autotest_common.sh@10 -- # set +x 00:09:28.758 ************************************ 00:09:29.016 START TEST env_dpdk_post_init 00:09:29.016 ************************************ 00:09:29.016 08:36:03 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:29.016 EAL: Detected CPU lcores: 10 00:09:29.016 EAL: Detected NUMA nodes: 1 00:09:29.016 EAL: Detected static linkage of DPDK 00:09:29.016 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:29.016 EAL: Selected IOVA mode 'PA' 00:09:29.016 EAL: VFIO support initialized 00:09:29.016 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:29.016 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:29.274 Starting DPDK initialization... 00:09:29.274 Starting SPDK post initialization... 00:09:29.274 SPDK NVMe probe 00:09:29.274 Attaching to 0000:00:10.0 00:09:29.274 Attached to 0000:00:10.0 00:09:29.274 Cleaning up... 
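env_dpdk_post_init above simply boots the SPDK env layer (which runs DPDK EAL initialization underneath) and then attaches the emulated NVMe controller at 0000:00:10.0 via the nvme probe path. The initialization half corresponds roughly to this sketch against the public spdk/env.h API; the core mask and base-virtaddr values mirror the -c 0x1 and --base-virtaddr arguments shown in the command line above, and the probe step is only indicated:

    #include "spdk/env.h"
    #include <stdio.h>

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "env_dpdk_post_init";
            opts.core_mask = "0x1";                 /* -c 0x1 */
            opts.base_virtaddr = 0x200000000000ULL; /* --base-virtaddr */

            if (spdk_env_init(&opts) < 0) {
                    fprintf(stderr, "DPDK initialization failed\n");
                    return 1;
            }

            /* spdk_nvme_probe() would walk the PCI bus here and attach
             * 0000:00:10.0, matching the "Attaching to ..." lines above. */
            return 0;
    }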
00:09:29.274 00:09:29.274 real 0m0.276s 00:09:29.274 user 0m0.083s 00:09:29.274 sys 0m0.093s 00:09:29.274 08:36:04 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.274 08:36:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:29.274 ************************************ 00:09:29.274 END TEST env_dpdk_post_init 00:09:29.274 ************************************ 00:09:29.274 08:36:04 env -- common/autotest_common.sh@1142 -- # return 0 00:09:29.274 08:36:04 env -- env/env.sh@26 -- # uname 00:09:29.274 08:36:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:29.274 08:36:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:29.274 08:36:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.274 08:36:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.274 08:36:04 env -- common/autotest_common.sh@10 -- # set +x 00:09:29.274 ************************************ 00:09:29.274 START TEST env_mem_callbacks 00:09:29.274 ************************************ 00:09:29.274 08:36:04 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:29.274 EAL: Detected CPU lcores: 10 00:09:29.274 EAL: Detected NUMA nodes: 1 00:09:29.274 EAL: Detected static linkage of DPDK 00:09:29.274 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:29.274 EAL: Selected IOVA mode 'PA' 00:09:29.274 EAL: VFIO support initialized 00:09:29.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:29.532 00:09:29.532 00:09:29.532 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.532 http://cunit.sourceforge.net/ 00:09:29.532 00:09:29.532 00:09:29.532 Suite: memory 00:09:29.532 Test: test ... 
00:09:29.532 register 0x200000200000 2097152 00:09:29.532 malloc 3145728 00:09:29.532 register 0x200000400000 4194304 00:09:29.532 buf 0x2000004fffc0 len 3145728 PASSED 00:09:29.532 malloc 64 00:09:29.532 buf 0x2000004ffec0 len 64 PASSED 00:09:29.532 malloc 4194304 00:09:29.532 register 0x200000800000 6291456 00:09:29.532 buf 0x2000009fffc0 len 4194304 PASSED 00:09:29.532 free 0x2000004fffc0 3145728 00:09:29.532 free 0x2000004ffec0 64 00:09:29.532 unregister 0x200000400000 4194304 PASSED 00:09:29.532 free 0x2000009fffc0 4194304 00:09:29.532 unregister 0x200000800000 6291456 PASSED 00:09:29.532 malloc 8388608 00:09:29.532 register 0x200000400000 10485760 00:09:29.532 buf 0x2000005fffc0 len 8388608 PASSED 00:09:29.532 free 0x2000005fffc0 8388608 00:09:29.532 unregister 0x200000400000 10485760 PASSED 00:09:29.532 passed 00:09:29.532 00:09:29.532 Run Summary: Type Total Ran Passed Failed Inactive 00:09:29.532 suites 1 1 n/a 0 0 00:09:29.532 tests 1 1 1 0 0 00:09:29.532 asserts 15 15 15 0 n/a 00:09:29.532 00:09:29.532 Elapsed time = 0.054 seconds 00:09:29.532 00:09:29.532 real 0m0.287s 00:09:29.532 user 0m0.123s 00:09:29.532 sys 0m0.061s 00:09:29.532 08:36:04 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.532 08:36:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:29.532 ************************************ 00:09:29.532 END TEST env_mem_callbacks 00:09:29.532 ************************************ 00:09:29.532 08:36:04 env -- common/autotest_common.sh@1142 -- # return 0 00:09:29.532 00:09:29.532 real 0m8.515s 00:09:29.532 user 0m6.863s 00:09:29.532 sys 0m1.273s 00:09:29.532 08:36:04 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.532 08:36:04 env -- common/autotest_common.sh@10 -- # set +x 00:09:29.532 ************************************ 00:09:29.532 END TEST env 00:09:29.532 ************************************ 00:09:29.532 08:36:04 -- common/autotest_common.sh@1142 -- # return 0 00:09:29.532 08:36:04 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:29.532 08:36:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.532 08:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.532 08:36:04 -- common/autotest_common.sh@10 -- # set +x 00:09:29.532 ************************************ 00:09:29.532 START TEST rpc 00:09:29.532 ************************************ 00:09:29.532 08:36:04 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:29.532 * Looking for test storage... 00:09:29.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:29.532 08:36:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=111142 00:09:29.532 08:36:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:29.532 08:36:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:29.532 08:36:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 111142 00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@829 -- # '[' -z 111142 ']' 00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
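The raw "register/malloc/free/unregister ... PASSED" lines printed by the mem_callbacks test further up are the work of a test-owned spdk_mem_map whose notify callback fires as heap regions are registered and released. A sketch of that shape against spdk/env.h's mem-map interface (test_mem_notify and make_test_map are illustrative names; the printed format is the test's own):

    #include "spdk/env.h"
    #include <stdio.h>

    static int
    test_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
                    enum spdk_mem_map_notify_action action,
                    void *vaddr, size_t size)
    {
            (void)cb_ctx;
            (void)map;
            /* Yields lines like "register 0x200000400000 4194304" above. */
            printf("%s %p %zu\n",
                   action == SPDK_MEM_MAP_NOTIFY_REGISTER ?
                           "register" : "unregister",
                   vaddr, size);
            return 0;
    }

    static const struct spdk_mem_map_ops test_map_ops = {
            .notify_cb = test_mem_notify,
    };

    /* Allocating a map subscribes it to current and future registered
     * regions, so each malloc/free pair in the test gets reported. */
    static struct spdk_mem_map *
    make_test_map(void)
    {
            return spdk_mem_map_alloc(0, &test_map_ops, NULL);
    }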
00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.790 08:36:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.790 [2024-07-12 08:36:04.819469] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:09:29.790 [2024-07-12 08:36:04.819895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111142 ] 00:09:30.048 [2024-07-12 08:36:04.991928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.048 [2024-07-12 08:36:05.198741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:30.048 [2024-07-12 08:36:05.199006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 111142' to capture a snapshot of events at runtime. 00:09:30.048 [2024-07-12 08:36:05.199158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.048 [2024-07-12 08:36:05.199218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.048 [2024-07-12 08:36:05.199355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid111142 for offline analysis/debug. 00:09:30.048 [2024-07-12 08:36:05.199498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.983 08:36:05 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.983 08:36:05 rpc -- common/autotest_common.sh@862 -- # return 0 00:09:30.983 08:36:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:30.983 08:36:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:30.983 08:36:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:30.983 08:36:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:30.983 08:36:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.983 08:36:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.983 08:36:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 ************************************ 00:09:30.983 START TEST rpc_integrity 00:09:30.983 ************************************ 00:09:30.983 08:36:05 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:30.983 08:36:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:30.983 08:36:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.983 08:36:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 08:36:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.983 08:36:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:30.983 08:36:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.983 
08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:30.983 { 00:09:30.983 "name": "Malloc0", 00:09:30.983 "aliases": [ 00:09:30.983 "2ed30bb5-4c49-4cc9-82a9-98215dc6b9ab" 00:09:30.983 ], 00:09:30.983 "product_name": "Malloc disk", 00:09:30.983 "block_size": 512, 00:09:30.983 "num_blocks": 16384, 00:09:30.983 "uuid": "2ed30bb5-4c49-4cc9-82a9-98215dc6b9ab", 00:09:30.983 "assigned_rate_limits": { 00:09:30.983 "rw_ios_per_sec": 0, 00:09:30.983 "rw_mbytes_per_sec": 0, 00:09:30.983 "r_mbytes_per_sec": 0, 00:09:30.983 "w_mbytes_per_sec": 0 00:09:30.983 }, 00:09:30.983 "claimed": false, 00:09:30.983 "zoned": false, 00:09:30.983 "supported_io_types": { 00:09:30.983 "read": true, 00:09:30.983 "write": true, 00:09:30.983 "unmap": true, 00:09:30.983 "flush": true, 00:09:30.983 "reset": true, 00:09:30.983 "nvme_admin": false, 00:09:30.983 "nvme_io": false, 00:09:30.983 "nvme_io_md": false, 00:09:30.983 "write_zeroes": true, 00:09:30.983 "zcopy": true, 00:09:30.983 "get_zone_info": false, 00:09:30.983 "zone_management": false, 00:09:30.983 "zone_append": false, 00:09:30.983 "compare": false, 00:09:30.983 "compare_and_write": false, 00:09:30.983 "abort": true, 00:09:30.983 "seek_hole": false, 00:09:30.983 "seek_data": false, 00:09:30.983 "copy": true, 00:09:30.983 "nvme_iov_md": false 00:09:30.983 }, 00:09:30.983 "memory_domains": [ 00:09:30.983 { 00:09:30.983 "dma_device_id": "system", 00:09:30.983 "dma_device_type": 1 00:09:30.983 }, 00:09:30.983 { 00:09:30.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.983 "dma_device_type": 2 00:09:30.983 } 00:09:30.983 ], 00:09:30.983 "driver_specific": {} 00:09:30.983 } 00:09:30.983 ]' 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 [2024-07-12 08:36:06.133509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:30.983 [2024-07-12 08:36:06.133756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.983 [2024-07-12 08:36:06.133859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:30.983 [2024-07-12 08:36:06.134069] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.983 [2024-07-12 08:36:06.136830] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.983 [2024-07-12 08:36:06.137026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:30.983 Passthru0 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.983 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:30.983 { 00:09:30.983 "name": "Malloc0", 00:09:30.983 "aliases": [ 00:09:30.983 "2ed30bb5-4c49-4cc9-82a9-98215dc6b9ab" 00:09:30.983 ], 00:09:30.983 "product_name": "Malloc disk", 00:09:30.983 "block_size": 512, 00:09:30.983 "num_blocks": 16384, 00:09:30.983 "uuid": "2ed30bb5-4c49-4cc9-82a9-98215dc6b9ab", 00:09:30.983 "assigned_rate_limits": { 00:09:30.983 "rw_ios_per_sec": 0, 00:09:30.983 "rw_mbytes_per_sec": 0, 00:09:30.983 "r_mbytes_per_sec": 0, 00:09:30.983 "w_mbytes_per_sec": 0 00:09:30.983 }, 00:09:30.983 "claimed": true, 00:09:30.983 "claim_type": "exclusive_write", 00:09:30.983 "zoned": false, 00:09:30.983 "supported_io_types": { 00:09:30.983 "read": true, 00:09:30.983 "write": true, 00:09:30.983 "unmap": true, 00:09:30.983 "flush": true, 00:09:30.983 "reset": true, 00:09:30.983 "nvme_admin": false, 00:09:30.983 "nvme_io": false, 00:09:30.983 "nvme_io_md": false, 00:09:30.983 "write_zeroes": true, 00:09:30.983 "zcopy": true, 00:09:30.983 "get_zone_info": false, 00:09:30.983 "zone_management": false, 00:09:30.983 "zone_append": false, 00:09:30.983 "compare": false, 00:09:30.983 "compare_and_write": false, 00:09:30.983 "abort": true, 00:09:30.983 "seek_hole": false, 00:09:30.983 "seek_data": false, 00:09:30.983 "copy": true, 00:09:30.983 "nvme_iov_md": false 00:09:30.983 }, 00:09:30.983 "memory_domains": [ 00:09:30.983 { 00:09:30.983 "dma_device_id": "system", 00:09:30.983 "dma_device_type": 1 00:09:30.983 }, 00:09:30.983 { 00:09:30.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.983 "dma_device_type": 2 00:09:30.983 } 00:09:30.983 ], 00:09:30.983 "driver_specific": {} 00:09:30.983 }, 00:09:30.983 { 00:09:30.983 "name": "Passthru0", 00:09:30.983 "aliases": [ 00:09:30.983 "ef099937-22d2-5480-95bf-3ba9e6c12f3a" 00:09:30.983 ], 00:09:30.983 "product_name": "passthru", 00:09:30.983 "block_size": 512, 00:09:30.983 "num_blocks": 16384, 00:09:30.983 "uuid": "ef099937-22d2-5480-95bf-3ba9e6c12f3a", 00:09:30.983 "assigned_rate_limits": { 00:09:30.983 "rw_ios_per_sec": 0, 00:09:30.983 "rw_mbytes_per_sec": 0, 00:09:30.983 "r_mbytes_per_sec": 0, 00:09:30.983 "w_mbytes_per_sec": 0 00:09:30.983 }, 00:09:30.983 "claimed": false, 00:09:30.983 "zoned": false, 00:09:30.983 "supported_io_types": { 00:09:30.983 "read": true, 00:09:30.983 "write": true, 00:09:30.983 "unmap": true, 00:09:30.983 "flush": true, 00:09:30.983 "reset": true, 00:09:30.983 "nvme_admin": false, 00:09:30.983 "nvme_io": false, 00:09:30.983 "nvme_io_md": false, 00:09:30.983 "write_zeroes": true, 00:09:30.983 "zcopy": true, 00:09:30.983 "get_zone_info": false, 00:09:30.983 "zone_management": false, 00:09:30.983 "zone_append": false, 00:09:30.983 "compare": false, 00:09:30.983 "compare_and_write": false, 00:09:30.983 "abort": true, 00:09:30.983 "seek_hole": false, 00:09:30.983 "seek_data": false, 00:09:30.983 "copy": true, 00:09:30.983 "nvme_iov_md": false 00:09:30.983 }, 00:09:30.983 "memory_domains": [ 00:09:30.984 { 00:09:30.984 "dma_device_id": "system", 00:09:30.984 "dma_device_type": 1 00:09:30.984 }, 00:09:30.984 { 00:09:30.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.984 "dma_device_type": 
2 00:09:30.984 } 00:09:30.984 ], 00:09:30.984 "driver_specific": { 00:09:30.984 "passthru": { 00:09:30.984 "name": "Passthru0", 00:09:30.984 "base_bdev_name": "Malloc0" 00:09:30.984 } 00:09:30.984 } 00:09:30.984 } 00:09:30.984 ]' 00:09:30.984 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:31.242 08:36:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:31.242 ************************************ 00:09:31.242 END TEST rpc_integrity 00:09:31.242 ************************************ 00:09:31.242 00:09:31.242 real 0m0.355s 00:09:31.242 user 0m0.231s 00:09:31.242 sys 0m0.020s 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:31.242 08:36:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:31.242 08:36:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.242 08:36:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.242 08:36:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 ************************************ 00:09:31.242 START TEST rpc_plugins 00:09:31.242 ************************************ 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:09:31.242 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.242 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:31.242 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:31.242 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.242 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:09:31.242 { 00:09:31.242 "name": "Malloc1", 00:09:31.242 "aliases": [ 00:09:31.242 "16cc2ca9-b2ca-4a27-90fb-31e3a7c652b1" 00:09:31.242 ], 00:09:31.242 "product_name": "Malloc disk", 00:09:31.242 "block_size": 4096, 00:09:31.242 "num_blocks": 256, 00:09:31.242 "uuid": "16cc2ca9-b2ca-4a27-90fb-31e3a7c652b1", 00:09:31.242 "assigned_rate_limits": { 00:09:31.242 "rw_ios_per_sec": 0, 00:09:31.242 "rw_mbytes_per_sec": 0, 00:09:31.242 "r_mbytes_per_sec": 0, 00:09:31.242 "w_mbytes_per_sec": 0 00:09:31.242 }, 00:09:31.242 "claimed": false, 00:09:31.242 "zoned": false, 00:09:31.242 "supported_io_types": { 00:09:31.242 "read": true, 00:09:31.242 "write": true, 00:09:31.242 "unmap": true, 00:09:31.242 "flush": true, 00:09:31.242 "reset": true, 00:09:31.242 "nvme_admin": false, 00:09:31.242 "nvme_io": false, 00:09:31.242 "nvme_io_md": false, 00:09:31.242 "write_zeroes": true, 00:09:31.242 "zcopy": true, 00:09:31.242 "get_zone_info": false, 00:09:31.242 "zone_management": false, 00:09:31.242 "zone_append": false, 00:09:31.242 "compare": false, 00:09:31.242 "compare_and_write": false, 00:09:31.242 "abort": true, 00:09:31.242 "seek_hole": false, 00:09:31.242 "seek_data": false, 00:09:31.242 "copy": true, 00:09:31.242 "nvme_iov_md": false 00:09:31.242 }, 00:09:31.242 "memory_domains": [ 00:09:31.242 { 00:09:31.242 "dma_device_id": "system", 00:09:31.242 "dma_device_type": 1 00:09:31.242 }, 00:09:31.242 { 00:09:31.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.242 "dma_device_type": 2 00:09:31.242 } 00:09:31.242 ], 00:09:31.242 "driver_specific": {} 00:09:31.242 } 00:09:31.242 ]' 00:09:31.242 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:31.501 08:36:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:31.501 00:09:31.501 real 0m0.172s 00:09:31.501 user 0m0.116s 00:09:31.501 sys 0m0.015s 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.501 08:36:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 ************************************ 00:09:31.501 END TEST rpc_plugins 00:09:31.501 ************************************ 00:09:31.501 08:36:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:31.501 08:36:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:31.501 08:36:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.501 08:36:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.501 08:36:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 ************************************ 00:09:31.501 
START TEST rpc_trace_cmd_test 00:09:31.501 ************************************ 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.501 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:31.501 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid111142", 00:09:31.501 "tpoint_group_mask": "0x8", 00:09:31.501 "iscsi_conn": { 00:09:31.501 "mask": "0x2", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "scsi": { 00:09:31.501 "mask": "0x4", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "bdev": { 00:09:31.501 "mask": "0x8", 00:09:31.501 "tpoint_mask": "0xffffffffffffffff" 00:09:31.501 }, 00:09:31.501 "nvmf_rdma": { 00:09:31.501 "mask": "0x10", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "nvmf_tcp": { 00:09:31.501 "mask": "0x20", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "ftl": { 00:09:31.501 "mask": "0x40", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "blobfs": { 00:09:31.501 "mask": "0x80", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.501 }, 00:09:31.501 "dsa": { 00:09:31.501 "mask": "0x200", 00:09:31.501 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "thread": { 00:09:31.502 "mask": "0x400", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "nvme_pcie": { 00:09:31.502 "mask": "0x800", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "iaa": { 00:09:31.502 "mask": "0x1000", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "nvme_tcp": { 00:09:31.502 "mask": "0x2000", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "bdev_nvme": { 00:09:31.502 "mask": "0x4000", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 }, 00:09:31.502 "sock": { 00:09:31.502 "mask": "0x8000", 00:09:31.502 "tpoint_mask": "0x0" 00:09:31.502 } 00:09:31.502 }' 00:09:31.502 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:31.502 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:31.502 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:31.760 ************************************ 00:09:31.760 END TEST rpc_trace_cmd_test 00:09:31.760 ************************************ 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:31.760 00:09:31.760 real 0m0.283s 00:09:31.760 user 0m0.247s 00:09:31.760 sys 0m0.028s 00:09:31.760 08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.760 
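The trace_get_info output above reflects a target started with the bdev tracepoint group enabled (group mask 0x8, per-group mask 0xffffffffffffffff). A rough sketch of inspecting and capturing those events, assuming the standard SPDK binaries and that -e sets the tracepoint group mask; the pid is the one reported earlier in this log and is purely illustrative:

  # start the target with the bdev tracepoint group enabled
  ./build/bin/spdk_tgt -e bdev &
  # query the trace shm path and per-group tpoint masks, as the test does
  ./scripts/rpc.py trace_get_info
  # capture a snapshot of events from the running process
  ./build/bin/spdk_trace -s spdk_tgt -p 111142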
08:36:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.760 08:36:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:31.760 08:36:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:31.760 08:36:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:31.760 08:36:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:31.760 08:36:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.760 08:36:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.760 08:36:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.760 ************************************ 00:09:31.760 START TEST rpc_daemon_integrity 00:09:31.760 ************************************ 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:31.760 08:36:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:32.018 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:32.018 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:32.018 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.018 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:32.019 { 00:09:32.019 "name": "Malloc2", 00:09:32.019 "aliases": [ 00:09:32.019 "543215e0-e757-4d03-bdf4-e9446698cf48" 00:09:32.019 ], 00:09:32.019 "product_name": "Malloc disk", 00:09:32.019 "block_size": 512, 00:09:32.019 "num_blocks": 16384, 00:09:32.019 "uuid": "543215e0-e757-4d03-bdf4-e9446698cf48", 00:09:32.019 "assigned_rate_limits": { 00:09:32.019 "rw_ios_per_sec": 0, 00:09:32.019 "rw_mbytes_per_sec": 0, 00:09:32.019 "r_mbytes_per_sec": 0, 00:09:32.019 "w_mbytes_per_sec": 0 00:09:32.019 }, 00:09:32.019 "claimed": false, 00:09:32.019 "zoned": false, 00:09:32.019 "supported_io_types": { 00:09:32.019 "read": true, 00:09:32.019 "write": true, 00:09:32.019 "unmap": true, 00:09:32.019 "flush": true, 00:09:32.019 "reset": true, 00:09:32.019 "nvme_admin": false, 00:09:32.019 "nvme_io": false, 00:09:32.019 "nvme_io_md": false, 00:09:32.019 "write_zeroes": true, 00:09:32.019 "zcopy": true, 00:09:32.019 "get_zone_info": false, 00:09:32.019 "zone_management": false, 00:09:32.019 "zone_append": false, 00:09:32.019 "compare": false, 00:09:32.019 "compare_and_write": false, 00:09:32.019 "abort": true, 00:09:32.019 "seek_hole": false, 
00:09:32.019 "seek_data": false, 00:09:32.019 "copy": true, 00:09:32.019 "nvme_iov_md": false 00:09:32.019 }, 00:09:32.019 "memory_domains": [ 00:09:32.019 { 00:09:32.019 "dma_device_id": "system", 00:09:32.019 "dma_device_type": 1 00:09:32.019 }, 00:09:32.019 { 00:09:32.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.019 "dma_device_type": 2 00:09:32.019 } 00:09:32.019 ], 00:09:32.019 "driver_specific": {} 00:09:32.019 } 00:09:32.019 ]' 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 [2024-07-12 08:36:07.109115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:32.019 [2024-07-12 08:36:07.109382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.019 [2024-07-12 08:36:07.109472] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:32.019 [2024-07-12 08:36:07.109700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.019 [2024-07-12 08:36:07.112333] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.019 [2024-07-12 08:36:07.112495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:32.019 Passthru0 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:32.019 { 00:09:32.019 "name": "Malloc2", 00:09:32.019 "aliases": [ 00:09:32.019 "543215e0-e757-4d03-bdf4-e9446698cf48" 00:09:32.019 ], 00:09:32.019 "product_name": "Malloc disk", 00:09:32.019 "block_size": 512, 00:09:32.019 "num_blocks": 16384, 00:09:32.019 "uuid": "543215e0-e757-4d03-bdf4-e9446698cf48", 00:09:32.019 "assigned_rate_limits": { 00:09:32.019 "rw_ios_per_sec": 0, 00:09:32.019 "rw_mbytes_per_sec": 0, 00:09:32.019 "r_mbytes_per_sec": 0, 00:09:32.019 "w_mbytes_per_sec": 0 00:09:32.019 }, 00:09:32.019 "claimed": true, 00:09:32.019 "claim_type": "exclusive_write", 00:09:32.019 "zoned": false, 00:09:32.019 "supported_io_types": { 00:09:32.019 "read": true, 00:09:32.019 "write": true, 00:09:32.019 "unmap": true, 00:09:32.019 "flush": true, 00:09:32.019 "reset": true, 00:09:32.019 "nvme_admin": false, 00:09:32.019 "nvme_io": false, 00:09:32.019 "nvme_io_md": false, 00:09:32.019 "write_zeroes": true, 00:09:32.019 "zcopy": true, 00:09:32.019 "get_zone_info": false, 00:09:32.019 "zone_management": false, 00:09:32.019 "zone_append": false, 00:09:32.019 "compare": false, 00:09:32.019 "compare_and_write": false, 00:09:32.019 "abort": true, 00:09:32.019 "seek_hole": false, 00:09:32.019 "seek_data": false, 00:09:32.019 "copy": true, 00:09:32.019 "nvme_iov_md": false 00:09:32.019 }, 00:09:32.019 
"memory_domains": [ 00:09:32.019 { 00:09:32.019 "dma_device_id": "system", 00:09:32.019 "dma_device_type": 1 00:09:32.019 }, 00:09:32.019 { 00:09:32.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.019 "dma_device_type": 2 00:09:32.019 } 00:09:32.019 ], 00:09:32.019 "driver_specific": {} 00:09:32.019 }, 00:09:32.019 { 00:09:32.019 "name": "Passthru0", 00:09:32.019 "aliases": [ 00:09:32.019 "80081f86-6dcb-5879-addd-8499c322e38e" 00:09:32.019 ], 00:09:32.019 "product_name": "passthru", 00:09:32.019 "block_size": 512, 00:09:32.019 "num_blocks": 16384, 00:09:32.019 "uuid": "80081f86-6dcb-5879-addd-8499c322e38e", 00:09:32.019 "assigned_rate_limits": { 00:09:32.019 "rw_ios_per_sec": 0, 00:09:32.019 "rw_mbytes_per_sec": 0, 00:09:32.019 "r_mbytes_per_sec": 0, 00:09:32.019 "w_mbytes_per_sec": 0 00:09:32.019 }, 00:09:32.019 "claimed": false, 00:09:32.019 "zoned": false, 00:09:32.019 "supported_io_types": { 00:09:32.019 "read": true, 00:09:32.019 "write": true, 00:09:32.019 "unmap": true, 00:09:32.019 "flush": true, 00:09:32.019 "reset": true, 00:09:32.019 "nvme_admin": false, 00:09:32.019 "nvme_io": false, 00:09:32.019 "nvme_io_md": false, 00:09:32.019 "write_zeroes": true, 00:09:32.019 "zcopy": true, 00:09:32.019 "get_zone_info": false, 00:09:32.019 "zone_management": false, 00:09:32.019 "zone_append": false, 00:09:32.019 "compare": false, 00:09:32.019 "compare_and_write": false, 00:09:32.019 "abort": true, 00:09:32.019 "seek_hole": false, 00:09:32.019 "seek_data": false, 00:09:32.019 "copy": true, 00:09:32.019 "nvme_iov_md": false 00:09:32.019 }, 00:09:32.019 "memory_domains": [ 00:09:32.019 { 00:09:32.019 "dma_device_id": "system", 00:09:32.019 "dma_device_type": 1 00:09:32.019 }, 00:09:32.019 { 00:09:32.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.019 "dma_device_type": 2 00:09:32.019 } 00:09:32.019 ], 00:09:32.019 "driver_specific": { 00:09:32.019 "passthru": { 00:09:32.019 "name": "Passthru0", 00:09:32.019 "base_bdev_name": "Malloc2" 00:09:32.019 } 00:09:32.019 } 00:09:32.019 } 00:09:32.019 ]' 00:09:32.019 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:32.277 
08:36:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:32.277 00:09:32.277 real 0m0.410s 00:09:32.277 user 0m0.289s 00:09:32.277 sys 0m0.028s 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.277 08:36:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:32.277 ************************************ 00:09:32.278 END TEST rpc_daemon_integrity 00:09:32.278 ************************************ 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:32.278 08:36:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:32.278 08:36:07 rpc -- rpc/rpc.sh@84 -- # killprocess 111142 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@948 -- # '[' -z 111142 ']' 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@952 -- # kill -0 111142 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@953 -- # uname 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111142 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111142' 00:09:32.278 killing process with pid 111142 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@967 -- # kill 111142 00:09:32.278 08:36:07 rpc -- common/autotest_common.sh@972 -- # wait 111142 00:09:34.807 ************************************ 00:09:34.807 END TEST rpc 00:09:34.807 ************************************ 00:09:34.807 00:09:34.807 real 0m4.862s 00:09:34.807 user 0m5.789s 00:09:34.807 sys 0m0.675s 00:09:34.807 08:36:09 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.807 08:36:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 08:36:09 -- common/autotest_common.sh@1142 -- # return 0 00:09:34.807 08:36:09 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:34.807 08:36:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.807 08:36:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.807 08:36:09 -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 ************************************ 00:09:34.807 START TEST skip_rpc 00:09:34.807 ************************************ 00:09:34.807 08:36:09 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:34.807 * Looking for test storage... 
00:09:34.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:34.807 08:36:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:34.807 08:36:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:34.807 08:36:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:34.807 08:36:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.807 08:36:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.807 08:36:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 ************************************ 00:09:34.807 START TEST skip_rpc 00:09:34.807 ************************************ 00:09:34.807 08:36:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:34.807 08:36:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111381 00:09:34.807 08:36:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:34.807 08:36:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.807 08:36:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:34.808 [2024-07-12 08:36:09.735181] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:09:34.808 [2024-07-12 08:36:09.735627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111381 ] 00:09:34.808 [2024-07-12 08:36:09.909210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.087 [2024-07-12 08:36:10.123900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111381 
00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 111381 ']' 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 111381 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111381 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.356 killing process with pid 111381 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111381' 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 111381 00:09:40.356 08:36:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 111381 00:09:41.729 ************************************ 00:09:41.729 END TEST skip_rpc 00:09:41.729 ************************************ 00:09:41.729 00:09:41.729 real 0m7.172s 00:09:41.729 user 0m6.695s 00:09:41.729 sys 0m0.384s 00:09:41.729 08:36:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.729 08:36:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.729 08:36:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:41.729 08:36:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:41.729 08:36:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:41.729 08:36:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.729 08:36:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.729 ************************************ 00:09:41.729 START TEST skip_rpc_with_json 00:09:41.729 ************************************ 00:09:41.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=111523 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 111523 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 111523 ']' 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.729 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
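The skip_rpc case that just finished starts the target with --no-rpc-server and expects every RPC to fail. A minimal reproduction, assuming the standard binaries and the default socket path:

  # no RPC listener is created at /var/tmp/spdk.sock
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  # any RPC, e.g. spdk_get_version, should therefore error out
  ./scripts/rpc.py spdk_get_version || echo 'RPC refused as expected'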
00:09:41.730 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.730 08:36:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:41.988 [2024-07-12 08:36:16.943819] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:09:41.988 [2024-07-12 08:36:16.944238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111523 ] 00:09:41.988 [2024-07-12 08:36:17.107583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.245 [2024-07-12 08:36:17.350460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:43.179 [2024-07-12 08:36:18.148147] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:43.179 request: 00:09:43.179 { 00:09:43.179 "trtype": "tcp", 00:09:43.179 "method": "nvmf_get_transports", 00:09:43.179 "req_id": 1 00:09:43.179 } 00:09:43.179 Got JSON-RPC error response 00:09:43.179 response: 00:09:43.179 { 00:09:43.179 "code": -19, 00:09:43.179 "message": "No such device" 00:09:43.179 } 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:43.179 [2024-07-12 08:36:18.156257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.179 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:43.180 { 00:09:43.180 "subsystems": [ 00:09:43.180 { 00:09:43.180 "subsystem": "scheduler", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "framework_set_scheduler", 00:09:43.180 "params": { 00:09:43.180 "name": "static" 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "vmd", 00:09:43.180 "config": [] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "sock", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "sock_set_default_impl", 00:09:43.180 "params": { 00:09:43.180 "impl_name": "posix" 00:09:43.180 } 00:09:43.180 
}, 00:09:43.180 { 00:09:43.180 "method": "sock_impl_set_options", 00:09:43.180 "params": { 00:09:43.180 "impl_name": "ssl", 00:09:43.180 "recv_buf_size": 4096, 00:09:43.180 "send_buf_size": 4096, 00:09:43.180 "enable_recv_pipe": true, 00:09:43.180 "enable_quickack": false, 00:09:43.180 "enable_placement_id": 0, 00:09:43.180 "enable_zerocopy_send_server": true, 00:09:43.180 "enable_zerocopy_send_client": false, 00:09:43.180 "zerocopy_threshold": 0, 00:09:43.180 "tls_version": 0, 00:09:43.180 "enable_ktls": false 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "sock_impl_set_options", 00:09:43.180 "params": { 00:09:43.180 "impl_name": "posix", 00:09:43.180 "recv_buf_size": 2097152, 00:09:43.180 "send_buf_size": 2097152, 00:09:43.180 "enable_recv_pipe": true, 00:09:43.180 "enable_quickack": false, 00:09:43.180 "enable_placement_id": 0, 00:09:43.180 "enable_zerocopy_send_server": true, 00:09:43.180 "enable_zerocopy_send_client": false, 00:09:43.180 "zerocopy_threshold": 0, 00:09:43.180 "tls_version": 0, 00:09:43.180 "enable_ktls": false 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "iobuf", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "iobuf_set_options", 00:09:43.180 "params": { 00:09:43.180 "small_pool_count": 8192, 00:09:43.180 "large_pool_count": 1024, 00:09:43.180 "small_bufsize": 8192, 00:09:43.180 "large_bufsize": 135168 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "keyring", 00:09:43.180 "config": [] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "accel", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "accel_set_options", 00:09:43.180 "params": { 00:09:43.180 "small_cache_size": 128, 00:09:43.180 "large_cache_size": 16, 00:09:43.180 "task_count": 2048, 00:09:43.180 "sequence_count": 2048, 00:09:43.180 "buf_count": 2048 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "bdev", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "bdev_set_options", 00:09:43.180 "params": { 00:09:43.180 "bdev_io_pool_size": 65535, 00:09:43.180 "bdev_io_cache_size": 256, 00:09:43.180 "bdev_auto_examine": true, 00:09:43.180 "iobuf_small_cache_size": 128, 00:09:43.180 "iobuf_large_cache_size": 16 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "bdev_raid_set_options", 00:09:43.180 "params": { 00:09:43.180 "process_window_size_kb": 1024 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "bdev_nvme_set_options", 00:09:43.180 "params": { 00:09:43.180 "action_on_timeout": "none", 00:09:43.180 "timeout_us": 0, 00:09:43.180 "timeout_admin_us": 0, 00:09:43.180 "keep_alive_timeout_ms": 10000, 00:09:43.180 "arbitration_burst": 0, 00:09:43.180 "low_priority_weight": 0, 00:09:43.180 "medium_priority_weight": 0, 00:09:43.180 "high_priority_weight": 0, 00:09:43.180 "nvme_adminq_poll_period_us": 10000, 00:09:43.180 "nvme_ioq_poll_period_us": 0, 00:09:43.180 "io_queue_requests": 0, 00:09:43.180 "delay_cmd_submit": true, 00:09:43.180 "transport_retry_count": 4, 00:09:43.180 "bdev_retry_count": 3, 00:09:43.180 "transport_ack_timeout": 0, 00:09:43.180 "ctrlr_loss_timeout_sec": 0, 00:09:43.180 "reconnect_delay_sec": 0, 00:09:43.180 "fast_io_fail_timeout_sec": 0, 00:09:43.180 "disable_auto_failback": false, 00:09:43.180 "generate_uuids": false, 00:09:43.180 "transport_tos": 0, 00:09:43.180 "nvme_error_stat": false, 00:09:43.180 "rdma_srq_size": 0, 
00:09:43.180 "io_path_stat": false, 00:09:43.180 "allow_accel_sequence": false, 00:09:43.180 "rdma_max_cq_size": 0, 00:09:43.180 "rdma_cm_event_timeout_ms": 0, 00:09:43.180 "dhchap_digests": [ 00:09:43.180 "sha256", 00:09:43.180 "sha384", 00:09:43.180 "sha512" 00:09:43.180 ], 00:09:43.180 "dhchap_dhgroups": [ 00:09:43.180 "null", 00:09:43.180 "ffdhe2048", 00:09:43.180 "ffdhe3072", 00:09:43.180 "ffdhe4096", 00:09:43.180 "ffdhe6144", 00:09:43.180 "ffdhe8192" 00:09:43.180 ] 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "bdev_nvme_set_hotplug", 00:09:43.180 "params": { 00:09:43.180 "period_us": 100000, 00:09:43.180 "enable": false 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "bdev_iscsi_set_options", 00:09:43.180 "params": { 00:09:43.180 "timeout_sec": 30 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "bdev_wait_for_examine" 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "nvmf", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "nvmf_set_config", 00:09:43.180 "params": { 00:09:43.180 "discovery_filter": "match_any", 00:09:43.180 "admin_cmd_passthru": { 00:09:43.180 "identify_ctrlr": false 00:09:43.180 } 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "nvmf_set_max_subsystems", 00:09:43.180 "params": { 00:09:43.180 "max_subsystems": 1024 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "nvmf_set_crdt", 00:09:43.180 "params": { 00:09:43.180 "crdt1": 0, 00:09:43.180 "crdt2": 0, 00:09:43.180 "crdt3": 0 00:09:43.180 } 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "method": "nvmf_create_transport", 00:09:43.180 "params": { 00:09:43.180 "trtype": "TCP", 00:09:43.180 "max_queue_depth": 128, 00:09:43.180 "max_io_qpairs_per_ctrlr": 127, 00:09:43.180 "in_capsule_data_size": 4096, 00:09:43.180 "max_io_size": 131072, 00:09:43.180 "io_unit_size": 131072, 00:09:43.180 "max_aq_depth": 128, 00:09:43.180 "num_shared_buffers": 511, 00:09:43.180 "buf_cache_size": 4294967295, 00:09:43.180 "dif_insert_or_strip": false, 00:09:43.180 "zcopy": false, 00:09:43.180 "c2h_success": true, 00:09:43.180 "sock_priority": 0, 00:09:43.180 "abort_timeout_sec": 1, 00:09:43.180 "ack_timeout": 0, 00:09:43.180 "data_wr_pool_size": 0 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "nbd", 00:09:43.180 "config": [] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "vhost_blk", 00:09:43.180 "config": [] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "scsi", 00:09:43.180 "config": null 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "iscsi", 00:09:43.180 "config": [ 00:09:43.180 { 00:09:43.180 "method": "iscsi_set_options", 00:09:43.180 "params": { 00:09:43.180 "node_base": "iqn.2016-06.io.spdk", 00:09:43.180 "max_sessions": 128, 00:09:43.180 "max_connections_per_session": 2, 00:09:43.180 "max_queue_depth": 64, 00:09:43.180 "default_time2wait": 2, 00:09:43.180 "default_time2retain": 20, 00:09:43.180 "first_burst_length": 8192, 00:09:43.180 "immediate_data": true, 00:09:43.180 "allow_duplicated_isid": false, 00:09:43.180 "error_recovery_level": 0, 00:09:43.180 "nop_timeout": 60, 00:09:43.180 "nop_in_interval": 30, 00:09:43.180 "disable_chap": false, 00:09:43.180 "require_chap": false, 00:09:43.180 "mutual_chap": false, 00:09:43.180 "chap_group": 0, 00:09:43.180 "max_large_datain_per_connection": 64, 00:09:43.180 "max_r2t_per_connection": 4, 00:09:43.180 "pdu_pool_size": 36864, 00:09:43.180 
"immediate_data_pool_size": 16384, 00:09:43.180 "data_out_pool_size": 2048 00:09:43.180 } 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 }, 00:09:43.180 { 00:09:43.180 "subsystem": "vhost_scsi", 00:09:43.180 "config": [] 00:09:43.180 } 00:09:43.180 ] 00:09:43.180 } 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 111523 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111523 ']' 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111523 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111523 00:09:43.180 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.181 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.181 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111523' 00:09:43.181 killing process with pid 111523 00:09:43.181 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111523 00:09:43.181 08:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111523 00:09:45.722 08:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=111594 00:09:45.722 08:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:45.722 08:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 111594 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111594 ']' 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111594 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111594 00:09:51.035 killing process with pid 111594 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111594' 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111594 00:09:51.035 08:36:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111594 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:52.940 ************************************ 00:09:52.940 END TEST 
skip_rpc_with_json 00:09:52.940 ************************************ 00:09:52.940 00:09:52.940 real 0m10.766s 00:09:52.940 user 0m10.259s 00:09:52.940 sys 0m0.896s 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:52.940 08:36:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.940 ************************************ 00:09:52.940 START TEST skip_rpc_with_delay 00:09:52.940 ************************************ 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:52.940 [2024-07-12 08:36:27.759701] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
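The error above is expected: --wait-for-rpc defers framework initialization until an explicit RPC arrives, so it cannot be combined with --no-rpc-server. A sketch of the valid combination, assuming the framework_start_init RPC is available in this SPDK version:

  # RPC server is up, but subsystem initialization is deferred
  ./build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
  # pre-init configuration RPCs could be issued here
  ./scripts/rpc.py framework_start_init   # finish initialization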
00:09:52.940 [2024-07-12 08:36:27.760061] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:52.940 ************************************ 00:09:52.940 END TEST skip_rpc_with_delay 00:09:52.940 ************************************ 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:52.940 00:09:52.940 real 0m0.120s 00:09:52.940 user 0m0.073s 00:09:52.940 sys 0m0.046s 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.940 08:36:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:52.940 08:36:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:52.940 08:36:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:52.940 08:36:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.940 08:36:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.940 ************************************ 00:09:52.940 START TEST exit_on_failed_rpc_init 00:09:52.940 ************************************ 00:09:52.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=111735 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 111735 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 111735 ']' 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.940 08:36:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:52.940 [2024-07-12 08:36:27.935703] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:09:52.940 [2024-07-12 08:36:27.936128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111735 ] 00:09:52.940 [2024-07-12 08:36:28.104495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.199 [2024-07-12 08:36:28.308560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:54.135 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:54.135 [2024-07-12 08:36:29.161998] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:09:54.135 [2024-07-12 08:36:29.162447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111765 ] 00:09:54.394 [2024-07-12 08:36:29.334944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.394 [2024-07-12 08:36:29.527591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.394 [2024-07-12 08:36:29.528020] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:54.394 [2024-07-12 08:36:29.528213] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:54.394 [2024-07-12 08:36:29.528271] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 111735 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 111735 ']' 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 111735 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.703 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111735 00:09:54.961 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.961 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.961 killing process with pid 111735 00:09:54.961 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111735' 00:09:54.961 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 111735 00:09:54.961 08:36:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 111735 00:09:56.864 ************************************ 00:09:56.864 END TEST exit_on_failed_rpc_init 00:09:56.864 ************************************ 00:09:56.864 00:09:56.864 real 0m4.092s 00:09:56.864 user 0m4.606s 00:09:56.864 sys 0m0.622s 00:09:56.864 08:36:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.864 08:36:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:56.864 08:36:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:56.864 08:36:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:56.864 ************************************ 00:09:56.864 END TEST skip_rpc 00:09:56.864 ************************************ 00:09:56.864 00:09:56.864 real 0m22.431s 00:09:56.864 user 0m21.788s 00:09:56.864 sys 0m2.055s 00:09:56.864 08:36:31 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.864 08:36:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.864 08:36:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:56.864 08:36:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:56.864 08:36:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
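What the exit_on_failed_rpc_init case above exercises, reduced to a sketch: two spdk_tgt instances contending for the default RPC Unix socket. Paths and core masks are taken from this run; the sleep stands in for the real waitforlisten helper, so this is an outline of the scenario rather than the test script itself.

    # First target claims /var/tmp/spdk.sock, the default RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    sleep 1   # stand-in for waitforlisten polling the socket

    # Second target cannot listen on the same socket: rpc.c logs
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use" and
    # spdk_app_stop exits non-zero -- exactly the failure the test expects.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2

    kill -SIGINT "$spdk_pid"   # killprocess, simplified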
00:09:56.864 08:36:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.864 08:36:32 -- common/autotest_common.sh@10 -- # set +x 00:09:56.864 ************************************ 00:09:56.864 START TEST rpc_client 00:09:56.864 ************************************ 00:09:56.864 08:36:32 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:57.123 * Looking for test storage... 00:09:57.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:57.123 08:36:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:57.123 OK 00:09:57.123 08:36:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:57.123 00:09:57.123 real 0m0.136s 00:09:57.123 user 0m0.092s 00:09:57.123 sys 0m0.053s 00:09:57.123 08:36:32 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.123 08:36:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:57.123 ************************************ 00:09:57.123 END TEST rpc_client 00:09:57.123 ************************************ 00:09:57.123 08:36:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:57.123 08:36:32 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:57.123 08:36:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:57.123 08:36:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.123 08:36:32 -- common/autotest_common.sh@10 -- # set +x 00:09:57.123 ************************************ 00:09:57.123 START TEST json_config 00:09:57.123 ************************************ 00:09:57.123 08:36:32 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:57.123 08:36:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:76f41855-6207-4fb4-928b-a76d092af487 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=76f41855-6207-4fb4-928b-a76d092af487 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.123 08:36:32 
json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.123 08:36:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.123 08:36:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.123 08:36:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.123 08:36:32 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:57.123 08:36:32 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:57.123 08:36:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:57.123 08:36:32 json_config -- paths/export.sh@5 -- # export PATH 00:09:57.123 08:36:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@47 -- # : 0 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.123 08:36:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.124 08:36:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.124 08:36:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.124 08:36:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.124 08:36:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 
00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:57.124 INFO: JSON configuration test init 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.124 08:36:32 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:57.124 08:36:32 json_config -- json_config/common.sh@9 -- # local app=target 00:09:57.124 08:36:32 json_config -- json_config/common.sh@10 -- # shift 00:09:57.124 08:36:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:57.124 08:36:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:57.124 08:36:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:57.124 08:36:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:57.124 08:36:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:57.124 08:36:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=111941 00:09:57.124 08:36:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:57.124 Waiting for target to run... 
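The declarations just traced are the whole bookkeeping model of json_config/common.sh: one bash associative array per concern, keyed by app role. Condensed from the trace above (keys and values verbatim; the launch comment paraphrases json_config_test_start_app rather than quoting its exact code):

    declare -A app_pid=([target]="" [initiator]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock'
                           [initiator]='/var/tmp/spdk_initiator.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024'
                           [initiator]='-m 0x2 -g -u -s 1024')
    declare -A configs_path=([target]="$rootdir/spdk_tgt_config.json"
                             [initiator]="$rootdir/spdk_initiator_config.json")
    # json_config_test_start_app "$app" then launches spdk_tgt with
    # ${app_params[$app]} -r ${app_socket[$app]} plus any extra flags,
    # and records $! in app_pid["$app"] for later shutdown.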
00:09:57.124 08:36:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:57.124 08:36:32 json_config -- json_config/common.sh@25 -- # waitforlisten 111941 /var/tmp/spdk_tgt.sock 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 111941 ']' 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:57.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.124 08:36:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.382 [2024-07-12 08:36:32.415101] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:09:57.382 [2024-07-12 08:36:32.415753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111941 ] 00:09:57.952 [2024-07-12 08:36:33.028189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.212 [2024-07-12 08:36:33.217767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.212 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:58.212 08:36:33 json_config -- json_config/common.sh@26 -- # echo '' 00:09:58.212 08:36:33 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:58.212 08:36:33 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.212 08:36:33 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:58.212 08:36:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.212 08:36:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.469 08:36:33 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:58.469 08:36:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:58.469 08:36:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:59.402 08:36:34 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:59.402 08:36:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:59.402 08:36:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.403 08:36:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:59.403 08:36:34 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:59.403 08:36:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:59.403 08:36:34 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:59.403 08:36:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.403 08:36:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:59.661 08:36:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.661 08:36:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:59.661 08:36:34 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:59.661 08:36:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:59.920 08:36:34 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:59.920 08:36:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create 
Nvme0n1 2 00:10:00.179 Nvme0n1p0 Nvme0n1p1 00:10:00.179 08:36:35 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:00.179 08:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:00.438 [2024-07-12 08:36:35.378106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:00.438 [2024-07-12 08:36:35.378565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:00.438 00:10:00.438 08:36:35 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:00.438 08:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:00.438 Malloc3 00:10:00.438 08:36:35 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:00.438 08:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:00.697 [2024-07-12 08:36:35.815511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:00.697 [2024-07-12 08:36:35.816048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.697 [2024-07-12 08:36:35.816235] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:00.697 [2024-07-12 08:36:35.816422] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.697 [2024-07-12 08:36:35.819718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.697 [2024-07-12 08:36:35.819947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:00.697 PTBdevFromMalloc3 00:10:00.697 08:36:35 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:00.697 08:36:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:00.956 Null0 00:10:00.956 08:36:36 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:00.956 08:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:01.214 Malloc0 00:10:01.214 08:36:36 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:01.214 08:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:01.473 Malloc1 00:10:01.473 08:36:36 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:01.473 08:36:36 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:01.732 102400+0 records in 00:10:01.732 102400+0 records out 00:10:01.732 104857600 bytes (105 MB, 100 MiB) 
copied, 0.280295 s, 374 MB/s 00:10:01.732 08:36:36 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:01.732 08:36:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:01.991 aio_disk 00:10:01.991 08:36:37 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:01.991 08:36:37 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:01.991 08:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:02.250 3f0df350-c4de-46f9-9e4a-ba2cfb11a696 00:10:02.250 08:36:37 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:02.250 08:36:37 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:02.250 08:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:02.509 08:36:37 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:02.509 08:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:02.767 08:36:37 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:02.767 08:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:03.025 08:36:37 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:03.025 08:36:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:03.284 08:36:38 json_config -- 
json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@71 -- # sort 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@72 -- # sort 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:03.284 08:36:38 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:03.284 08:36:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 
00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\7\8\f\c\8\b\e\-\c\7\f\a\-\4\2\f\d\-\b\6\7\7\-\4\d\5\6\f\4\0\7\b\1\f\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\d\4\1\1\0\6\3\-\1\2\7\7\-\4\e\6\6\-\a\7\7\e\-\4\0\c\d\b\e\3\a\f\3\f\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\0\2\1\3\f\3\7\-\1\b\7\1\-\4\e\5\1\-\b\b\7\8\-\4\1\1\3\1\3\f\1\7\9\b\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\d\1\5\5\c\d\9\-\b\8\c\3\-\4\a\d\1\-\9\c\0\a\-\7\1\f\d\9\0\8\e\c\a\5\3 ]] 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@86 -- # cat 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 00:10:03.543 Expected events matched: 00:10:03.543 bdev_register:378fc8be-c7fa-42fd-b677-4d56f407b1f4 00:10:03.543 bdev_register:4d411063-1277-4e66-a77e-40cdbe3af3fd 00:10:03.543 bdev_register:Malloc0 00:10:03.543 bdev_register:Malloc0p0 00:10:03.543 bdev_register:Malloc0p1 00:10:03.543 bdev_register:Malloc0p2 00:10:03.543 bdev_register:Malloc1 00:10:03.543 bdev_register:Malloc3 00:10:03.543 bdev_register:Null0 00:10:03.543 bdev_register:Nvme0n1 00:10:03.543 bdev_register:Nvme0n1p0 00:10:03.543 bdev_register:Nvme0n1p1 00:10:03.543 bdev_register:PTBdevFromMalloc3 00:10:03.543 bdev_register:aio_disk 00:10:03.543 bdev_register:b0213f37-1b71-4e51-bb78-411313f179b2 00:10:03.543 bdev_register:fd155cd9-b8c3-4ad1-9c0a-71fd908eca53 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:03.543 08:36:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.543 08:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:03.543 08:36:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.543 08:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:03.543 08:36:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:03.543 08:36:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:03.802 MallocBdevForConfigChangeCheck 00:10:03.802 08:36:38 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:03.802 08:36:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 
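All of the registration events just matched came from the RPC sequence traced between the split and clone steps above. Collected in one place, with a small wrapper standing in for the test's tgt_rpc helper (the calls, sizes, and names are verbatim from this run):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc bdev_split_create Nvme0n1 2                      # -> Nvme0n1p0, Nvme0n1p1
    rpc bdev_split_create Malloc0 3                      # -> Malloc0p0..p2 (Malloc0 arrives later; the split applies on arrival)
    rpc bdev_malloc_create 8 4096 --name Malloc3
    rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    rpc bdev_null_create Null0 32 512
    rpc bdev_malloc_create 32 512 --name Malloc0
    rpc bdev_malloc_create 16 4096 --name Malloc1
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    rpc bdev_aio_create /sample_aio aio_disk 1024
    rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    rpc bdev_lvol_create -l lvs_test lvol0 32
    rpc bdev_lvol_create -l lvs_test -t lvol1 32
    rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    rpc bdev_lvol_clone lvs_test/snapshot0 clone0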
00:10:03.802 08:36:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.802 08:36:38 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:03.802 08:36:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:04.060 INFO: shutting down applications... 00:10:04.060 08:36:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:04.060 08:36:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:04.060 08:36:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:04.060 08:36:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:04.060 08:36:39 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:04.319 [2024-07-12 08:36:39.377303] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:04.578 Calling clear_vhost_scsi_subsystem 00:10:04.578 Calling clear_iscsi_subsystem 00:10:04.578 Calling clear_vhost_blk_subsystem 00:10:04.578 Calling clear_nbd_subsystem 00:10:04.578 Calling clear_nvmf_subsystem 00:10:04.578 Calling clear_bdev_subsystem 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:04.578 08:36:39 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:04.837 08:36:39 json_config -- json_config/json_config.sh@345 -- # break 00:10:04.837 08:36:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:04.837 08:36:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:04.837 08:36:39 json_config -- json_config/common.sh@31 -- # local app=target 00:10:04.837 08:36:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:04.837 08:36:39 json_config -- json_config/common.sh@35 -- # [[ -n 111941 ]] 00:10:04.837 08:36:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 111941 00:10:04.837 08:36:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:04.837 08:36:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.837 08:36:39 json_config -- json_config/common.sh@41 -- # kill -0 111941 00:10:04.837 08:36:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.404 08:36:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.404 08:36:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.404 08:36:40 json_config -- json_config/common.sh@41 -- # kill -0 111941 00:10:05.404 08:36:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.971 SPDK target shutdown done 00:10:05.971 INFO: relaunching applications... 00:10:05.971 Waiting for target to run... 
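The shutdown handshake that just ran, in outline: SIGINT the target, then poll with kill -0 until the process disappears or the retry budget runs out. The signal, the 30-iteration bound, and the 0.5 s sleep are exactly what the trace shows; the stderr redirect is added here only to keep the polling quiet.

    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break   # process gone -> shutdown done
        sleep 0.5
    done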
00:10:05.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:05.971 08:36:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.971 08:36:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.971 08:36:40 json_config -- json_config/common.sh@41 -- # kill -0 111941 00:10:05.971 08:36:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:05.971 08:36:40 json_config -- json_config/common.sh@43 -- # break 00:10:05.971 08:36:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:05.971 08:36:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:05.971 08:36:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:10:05.971 08:36:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.971 08:36:40 json_config -- json_config/common.sh@9 -- # local app=target 00:10:05.971 08:36:40 json_config -- json_config/common.sh@10 -- # shift 00:10:05.971 08:36:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:05.971 08:36:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:05.971 08:36:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:05.971 08:36:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:05.971 08:36:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:05.971 08:36:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112220 00:10:05.971 08:36:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:05.971 08:36:40 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.971 08:36:40 json_config -- json_config/common.sh@25 -- # waitforlisten 112220 /var/tmp/spdk_tgt.sock 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 112220 ']' 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.971 08:36:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:05.971 [2024-07-12 08:36:41.008180] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:10:05.971 [2024-07-12 08:36:41.008648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112220 ] 00:10:06.537 [2024-07-12 08:36:41.562713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.796 [2024-07-12 08:36:41.783875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.362 [2024-07-12 08:36:42.425759] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:07.362 [2024-07-12 08:36:42.426179] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:07.362 [2024-07-12 08:36:42.433656] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:07.362 [2024-07-12 08:36:42.433864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:07.362 [2024-07-12 08:36:42.441700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:07.362 [2024-07-12 08:36:42.441924] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:07.362 [2024-07-12 08:36:42.442086] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:07.362 [2024-07-12 08:36:42.537432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:07.362 [2024-07-12 08:36:42.537759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.362 [2024-07-12 08:36:42.537854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:07.362 [2024-07-12 08:36:42.538089] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.362 [2024-07-12 08:36:42.538664] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.362 [2024-07-12 08:36:42.538837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:07.620 00:10:07.620 INFO: Checking if target configuration is the same... 00:10:07.620 08:36:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.620 08:36:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:10:07.620 08:36:42 json_config -- json_config/common.sh@26 -- # echo '' 00:10:07.620 08:36:42 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:07.620 08:36:42 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:07.620 08:36:42 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:07.621 08:36:42 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:07.621 08:36:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:07.621 + '[' 2 -ne 2 ']' 00:10:07.621 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:07.621 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:07.621 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:07.621 +++ basename /dev/fd/62 00:10:07.621 ++ mktemp /tmp/62.XXX 00:10:07.621 + tmp_file_1=/tmp/62.MSu 00:10:07.621 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:07.621 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:07.621 + tmp_file_2=/tmp/spdk_tgt_config.json.R4B 00:10:07.621 + ret=0 00:10:07.621 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.188 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.189 + diff -u /tmp/62.MSu /tmp/spdk_tgt_config.json.R4B 00:10:08.189 INFO: JSON config files are the same 00:10:08.189 + echo 'INFO: JSON config files are the same' 00:10:08.189 + rm /tmp/62.MSu /tmp/spdk_tgt_config.json.R4B 00:10:08.189 + exit 0 00:10:08.189 INFO: changing configuration and checking if this can be detected... 00:10:08.189 08:36:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:08.189 08:36:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:08.189 08:36:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:08.189 08:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:08.189 08:36:43 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:08.189 08:36:43 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:08.189 08:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:08.189 + '[' 2 -ne 2 ']' 00:10:08.189 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:08.189 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:08.189 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:08.189 +++ basename /dev/fd/62 00:10:08.189 ++ mktemp /tmp/62.XXX 00:10:08.189 + tmp_file_1=/tmp/62.bNy 00:10:08.189 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:08.189 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:08.189 + tmp_file_2=/tmp/spdk_tgt_config.json.2k5 00:10:08.189 + ret=0 00:10:08.446 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.704 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.704 + diff -u /tmp/62.bNy /tmp/spdk_tgt_config.json.2k5 00:10:08.704 + ret=1 00:10:08.704 + echo '=== Start of file: /tmp/62.bNy ===' 00:10:08.704 + cat /tmp/62.bNy 00:10:08.704 + echo '=== End of file: /tmp/62.bNy ===' 00:10:08.704 + echo '' 00:10:08.704 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2k5 ===' 00:10:08.704 + cat /tmp/spdk_tgt_config.json.2k5 00:10:08.704 + echo '=== End of file: /tmp/spdk_tgt_config.json.2k5 ===' 00:10:08.704 + echo '' 00:10:08.704 + rm /tmp/62.bNy /tmp/spdk_tgt_config.json.2k5 00:10:08.704 + exit 1 00:10:08.704 INFO: configuration change detected. 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
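Both comparison passes above follow the same json_diff.sh recipe: canonicalize each side with config_filter.py -method sort into temp files, then diff the results. A sketch with the two inputs as placeholders -- the real script reads the live config from a /dev/fd handle, and xtrace does not echo the redirections, so those details are assumptions here:

    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    test/json_config/config_filter.py -method sort < "$live_config"  > "$tmp_file_1"
    test/json_config/config_filter.py -method sort < "$saved_config" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'      # first pass: ret=0
    else
        ret=1   # second pass, after deleting MallocBdevForConfigChangeCheck;
                # the caller then prints 'INFO: configuration change detected.'
    fi
    rm -f "$tmp_file_1" "$tmp_file_2"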
00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:08.704 08:36:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.704 08:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@317 -- # [[ -n 112220 ]] 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:08.704 08:36:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.704 08:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:08.704 08:36:43 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:08.704 08:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:08.962 08:36:44 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:08.962 08:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:09.220 08:36:44 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:09.220 08:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:09.479 08:36:44 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:09.479 08:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:09.737 08:36:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:09.737 08:36:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:09.737 08:36:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:09.737 08:36:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:09.737 08:36:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:09.737 08:36:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.737 08:36:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:09.995 08:36:44 json_config -- json_config/json_config.sh@323 -- # killprocess 112220 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@948 -- # '[' -z 112220 ']' 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@952 -- # kill -0 112220 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@953 -- # uname 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112220 00:10:09.995 killing process with pid 112220 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112220' 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@967 -- # kill 112220 00:10:09.995 08:36:44 json_config -- common/autotest_common.sh@972 -- # wait 112220 00:10:10.991 08:36:46 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:10.991 08:36:46 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:10.991 08:36:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.991 08:36:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.991 08:36:46 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:10.991 08:36:46 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:10.991 INFO: Success 00:10:10.991 ************************************ 00:10:10.991 END TEST json_config 00:10:10.991 ************************************ 00:10:10.991 00:10:10.991 real 0m13.850s 00:10:10.991 user 0m19.783s 00:10:10.991 sys 0m2.544s 00:10:10.991 08:36:46 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.991 08:36:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.991 08:36:46 -- common/autotest_common.sh@1142 -- # return 0 00:10:10.991 08:36:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:10.991 08:36:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.991 08:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.991 08:36:46 -- common/autotest_common.sh@10 -- # set +x 00:10:10.991 ************************************ 00:10:10.991 START TEST json_config_extra_key 00:10:10.991 ************************************ 00:10:10.991 08:36:46 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:10.991 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9707134d-b367-4b3a-ab15-fc9b5e7ba61d 00:10:10.991 08:36:46 json_config_extra_key -- 
nvmf/common.sh@18 -- # NVME_HOSTID=9707134d-b367-4b3a-ab15-fc9b5e7ba61d 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.991 08:36:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.250 08:36:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.250 08:36:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.250 08:36:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.250 08:36:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:11.250 08:36:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:11.250 08:36:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:11.250 08:36:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:11.250 08:36:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:11.250 08:36:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:11.250 08:36:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.251 08:36:46 json_config_extra_key -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.251 08:36:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:11.251 INFO: launching applications... 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:11.251 08:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=112397 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:11.251 Waiting for target to run... 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:11.251 08:36:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 112397 /var/tmp/spdk_tgt.sock 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 112397 ']' 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
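[annotation] The launch sequence traced above reduces to a reusable pattern: start spdk_tgt with a JSON config on a private RPC socket, record its pid, and poll that socket until the app answers. A minimal standalone sketch follows; it mirrors json_config/common.sh only loosely, and the rpc_get_methods probe and the retry budget are assumptions rather than the test's literal code.

    #!/usr/bin/env bash
    # Sketch: launch spdk_tgt from a JSON config and wait for its RPC socket.
    SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid=$!

    # Poll until the target services RPCs on its UNIX-domain socket.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    echo "target running as pid $app_pid"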
00:10:11.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.251 08:36:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:11.251 [2024-07-12 08:36:46.271381] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:10:11.251 [2024-07-12 08:36:46.271763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112397 ] 00:10:11.818 [2024-07-12 08:36:46.728183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.818 [2024-07-12 08:36:46.901715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.385 00:10:12.385 INFO: shutting down applications... 00:10:12.385 08:36:47 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.385 08:36:47 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:12.385 08:36:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:12.385 08:36:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 112397 ]] 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 112397 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:12.385 08:36:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:12.953 08:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:12.953 08:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:12.953 08:36:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:12.953 08:36:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:13.520 08:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:13.520 08:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:13.520 08:36:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:13.520 08:36:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:13.780 08:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:13.780 08:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:13.780 08:36:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:13.780 08:36:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:14.347 08:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:14.347 08:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:14.347 08:36:49 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:14.347 08:36:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112397 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:14.912 08:36:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:14.912 SPDK target shutdown done 00:10:14.912 08:36:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:14.912 Success 00:10:14.912 00:10:14.912 real 0m3.853s 00:10:14.912 user 0m3.469s 00:10:14.912 sys 0m0.581s 00:10:14.912 08:36:49 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.912 08:36:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:14.912 ************************************ 00:10:14.912 END TEST json_config_extra_key 00:10:14.912 ************************************ 00:10:14.912 08:36:50 -- common/autotest_common.sh@1142 -- # return 0 00:10:14.912 08:36:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:14.912 08:36:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:14.912 08:36:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.912 08:36:50 -- common/autotest_common.sh@10 -- # set +x 00:10:14.912 ************************************ 00:10:14.912 START TEST alias_rpc 00:10:14.912 ************************************ 00:10:14.912 08:36:50 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:14.912 * Looking for test storage... 00:10:15.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:15.170 08:36:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:15.170 08:36:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=112506 00:10:15.170 08:36:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 112506 00:10:15.170 08:36:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 112506 ']' 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.170 08:36:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.170 [2024-07-12 08:36:50.170932] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
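[annotation] The repeated `kill -0` / `sleep 0.5` iterations above are SPDK's bounded graceful-shutdown idiom: send SIGINT, then give the target up to 30 half-second polls to exit before escalating. A condensed sketch, with the helper name ours rather than common.sh's:

    # Sketch of the SIGINT-then-poll shutdown seen in json_config/common.sh.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do                # ~15 s total budget
            kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
            sleep 0.5
        done
        echo "pid $pid still alive after SIGINT, sending SIGKILL" >&2
        kill -9 "$pid"
    }

Note that `kill -0` sends no signal; it only tests whether the process still exists.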
00:10:15.170 [2024-07-12 08:36:50.171308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112506 ] 00:10:15.170 [2024-07-12 08:36:50.330948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.429 [2024-07-12 08:36:50.539866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.365 08:36:51 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.365 08:36:51 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:16.365 08:36:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:16.625 08:36:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 112506 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 112506 ']' 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 112506 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112506 00:10:16.625 killing process with pid 112506 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112506' 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@967 -- # kill 112506 00:10:16.625 08:36:51 alias_rpc -- common/autotest_common.sh@972 -- # wait 112506 00:10:18.550 00:10:18.550 real 0m3.622s 00:10:18.550 user 0m3.906s 00:10:18.550 sys 0m0.465s 00:10:18.550 08:36:53 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.550 ************************************ 00:10:18.550 END TEST alias_rpc 00:10:18.550 ************************************ 00:10:18.550 08:36:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.550 08:36:53 -- common/autotest_common.sh@1142 -- # return 0 00:10:18.550 08:36:53 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:18.550 08:36:53 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:18.550 08:36:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:18.550 08:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.550 08:36:53 -- common/autotest_common.sh@10 -- # set +x 00:10:18.550 ************************************ 00:10:18.550 START TEST spdkcli_tcp 00:10:18.550 ************************************ 00:10:18.550 08:36:53 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:18.808 * Looking for test storage... 
00:10:18.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=112633 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:18.808 08:36:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 112633 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 112633 ']' 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.808 08:36:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 [2024-07-12 08:36:53.847514] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
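[annotation] The spdkcli_tcp run starting here exercises the RPC server over TCP rather than over the UNIX socket directly: a socat process listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side (see the rpc_get_methods call below). A sketch of that bridge, reusing the exact socat and rpc.py invocations from the trace:

    # Bridge the UNIX-domain RPC socket onto TCP port 9998, as spdkcli/tcp.sh does.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive RPCs through the bridge: -r retries, -t per-call timeout in seconds.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"   # tear the bridge down when finished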
00:10:18.808 [2024-07-12 08:36:53.847888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112633 ] 00:10:19.065 [2024-07-12 08:36:54.014809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.065 [2024-07-12 08:36:54.219335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.065 [2024-07-12 08:36:54.219342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.998 08:36:55 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.998 08:36:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:10:19.998 08:36:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=112655 00:10:19.998 08:36:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:19.998 08:36:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:20.257 [ 00:10:20.257 "spdk_get_version", 00:10:20.257 "rpc_get_methods", 00:10:20.257 "keyring_get_keys", 00:10:20.257 "trace_get_info", 00:10:20.257 "trace_get_tpoint_group_mask", 00:10:20.257 "trace_disable_tpoint_group", 00:10:20.257 "trace_enable_tpoint_group", 00:10:20.257 "trace_clear_tpoint_mask", 00:10:20.257 "trace_set_tpoint_mask", 00:10:20.257 "framework_get_pci_devices", 00:10:20.257 "framework_get_config", 00:10:20.257 "framework_get_subsystems", 00:10:20.257 "iobuf_get_stats", 00:10:20.257 "iobuf_set_options", 00:10:20.257 "sock_get_default_impl", 00:10:20.257 "sock_set_default_impl", 00:10:20.257 "sock_impl_set_options", 00:10:20.257 "sock_impl_get_options", 00:10:20.257 "vmd_rescan", 00:10:20.257 "vmd_remove_device", 00:10:20.257 "vmd_enable", 00:10:20.257 "accel_get_stats", 00:10:20.257 "accel_set_options", 00:10:20.257 "accel_set_driver", 00:10:20.257 "accel_crypto_key_destroy", 00:10:20.257 "accel_crypto_keys_get", 00:10:20.257 "accel_crypto_key_create", 00:10:20.257 "accel_assign_opc", 00:10:20.257 "accel_get_module_info", 00:10:20.257 "accel_get_opc_assignments", 00:10:20.257 "notify_get_notifications", 00:10:20.257 "notify_get_types", 00:10:20.257 "bdev_get_histogram", 00:10:20.257 "bdev_enable_histogram", 00:10:20.257 "bdev_set_qos_limit", 00:10:20.257 "bdev_set_qd_sampling_period", 00:10:20.257 "bdev_get_bdevs", 00:10:20.257 "bdev_reset_iostat", 00:10:20.257 "bdev_get_iostat", 00:10:20.257 "bdev_examine", 00:10:20.257 "bdev_wait_for_examine", 00:10:20.257 "bdev_set_options", 00:10:20.257 "scsi_get_devices", 00:10:20.257 "thread_set_cpumask", 00:10:20.257 "framework_get_governor", 00:10:20.257 "framework_get_scheduler", 00:10:20.257 "framework_set_scheduler", 00:10:20.257 "framework_get_reactors", 00:10:20.257 "thread_get_io_channels", 00:10:20.257 "thread_get_pollers", 00:10:20.257 "thread_get_stats", 00:10:20.257 "framework_monitor_context_switch", 00:10:20.257 "spdk_kill_instance", 00:10:20.257 "log_enable_timestamps", 00:10:20.257 "log_get_flags", 00:10:20.257 "log_clear_flag", 00:10:20.257 "log_set_flag", 00:10:20.257 "log_get_level", 00:10:20.257 "log_set_level", 00:10:20.257 "log_get_print_level", 00:10:20.257 "log_set_print_level", 00:10:20.257 "framework_enable_cpumask_locks", 00:10:20.257 "framework_disable_cpumask_locks", 00:10:20.257 "framework_wait_init", 00:10:20.257 "framework_start_init", 00:10:20.257 
"virtio_blk_create_transport", 00:10:20.257 "virtio_blk_get_transports", 00:10:20.257 "vhost_controller_set_coalescing", 00:10:20.257 "vhost_get_controllers", 00:10:20.257 "vhost_delete_controller", 00:10:20.257 "vhost_create_blk_controller", 00:10:20.257 "vhost_scsi_controller_remove_target", 00:10:20.257 "vhost_scsi_controller_add_target", 00:10:20.257 "vhost_start_scsi_controller", 00:10:20.257 "vhost_create_scsi_controller", 00:10:20.257 "nbd_get_disks", 00:10:20.257 "nbd_stop_disk", 00:10:20.257 "nbd_start_disk", 00:10:20.257 "env_dpdk_get_mem_stats", 00:10:20.257 "nvmf_stop_mdns_prr", 00:10:20.257 "nvmf_publish_mdns_prr", 00:10:20.257 "nvmf_subsystem_get_listeners", 00:10:20.257 "nvmf_subsystem_get_qpairs", 00:10:20.257 "nvmf_subsystem_get_controllers", 00:10:20.257 "nvmf_get_stats", 00:10:20.257 "nvmf_get_transports", 00:10:20.257 "nvmf_create_transport", 00:10:20.257 "nvmf_get_targets", 00:10:20.257 "nvmf_delete_target", 00:10:20.257 "nvmf_create_target", 00:10:20.257 "nvmf_subsystem_allow_any_host", 00:10:20.257 "nvmf_subsystem_remove_host", 00:10:20.257 "nvmf_subsystem_add_host", 00:10:20.257 "nvmf_ns_remove_host", 00:10:20.257 "nvmf_ns_add_host", 00:10:20.257 "nvmf_subsystem_remove_ns", 00:10:20.257 "nvmf_subsystem_add_ns", 00:10:20.257 "nvmf_subsystem_listener_set_ana_state", 00:10:20.257 "nvmf_discovery_get_referrals", 00:10:20.257 "nvmf_discovery_remove_referral", 00:10:20.257 "nvmf_discovery_add_referral", 00:10:20.257 "nvmf_subsystem_remove_listener", 00:10:20.257 "nvmf_subsystem_add_listener", 00:10:20.257 "nvmf_delete_subsystem", 00:10:20.257 "nvmf_create_subsystem", 00:10:20.257 "nvmf_get_subsystems", 00:10:20.257 "nvmf_set_crdt", 00:10:20.257 "nvmf_set_config", 00:10:20.258 "nvmf_set_max_subsystems", 00:10:20.258 "iscsi_get_histogram", 00:10:20.258 "iscsi_enable_histogram", 00:10:20.258 "iscsi_set_options", 00:10:20.258 "iscsi_get_auth_groups", 00:10:20.258 "iscsi_auth_group_remove_secret", 00:10:20.258 "iscsi_auth_group_add_secret", 00:10:20.258 "iscsi_delete_auth_group", 00:10:20.258 "iscsi_create_auth_group", 00:10:20.258 "iscsi_set_discovery_auth", 00:10:20.258 "iscsi_get_options", 00:10:20.258 "iscsi_target_node_request_logout", 00:10:20.258 "iscsi_target_node_set_redirect", 00:10:20.258 "iscsi_target_node_set_auth", 00:10:20.258 "iscsi_target_node_add_lun", 00:10:20.258 "iscsi_get_stats", 00:10:20.258 "iscsi_get_connections", 00:10:20.258 "iscsi_portal_group_set_auth", 00:10:20.258 "iscsi_start_portal_group", 00:10:20.258 "iscsi_delete_portal_group", 00:10:20.258 "iscsi_create_portal_group", 00:10:20.258 "iscsi_get_portal_groups", 00:10:20.258 "iscsi_delete_target_node", 00:10:20.258 "iscsi_target_node_remove_pg_ig_maps", 00:10:20.258 "iscsi_target_node_add_pg_ig_maps", 00:10:20.258 "iscsi_create_target_node", 00:10:20.258 "iscsi_get_target_nodes", 00:10:20.258 "iscsi_delete_initiator_group", 00:10:20.258 "iscsi_initiator_group_remove_initiators", 00:10:20.258 "iscsi_initiator_group_add_initiators", 00:10:20.258 "iscsi_create_initiator_group", 00:10:20.258 "iscsi_get_initiator_groups", 00:10:20.258 "keyring_linux_set_options", 00:10:20.258 "keyring_file_remove_key", 00:10:20.258 "keyring_file_add_key", 00:10:20.258 "iaa_scan_accel_module", 00:10:20.258 "dsa_scan_accel_module", 00:10:20.258 "ioat_scan_accel_module", 00:10:20.258 "accel_error_inject_error", 00:10:20.258 "bdev_iscsi_delete", 00:10:20.258 "bdev_iscsi_create", 00:10:20.258 "bdev_iscsi_set_options", 00:10:20.258 "bdev_virtio_attach_controller", 00:10:20.258 "bdev_virtio_scsi_get_devices", 00:10:20.258 
"bdev_virtio_detach_controller", 00:10:20.258 "bdev_virtio_blk_set_hotplug", 00:10:20.258 "bdev_ftl_set_property", 00:10:20.258 "bdev_ftl_get_properties", 00:10:20.258 "bdev_ftl_get_stats", 00:10:20.258 "bdev_ftl_unmap", 00:10:20.258 "bdev_ftl_unload", 00:10:20.258 "bdev_ftl_delete", 00:10:20.258 "bdev_ftl_load", 00:10:20.258 "bdev_ftl_create", 00:10:20.258 "bdev_aio_delete", 00:10:20.258 "bdev_aio_rescan", 00:10:20.258 "bdev_aio_create", 00:10:20.258 "blobfs_create", 00:10:20.258 "blobfs_detect", 00:10:20.258 "blobfs_set_cache_size", 00:10:20.258 "bdev_zone_block_delete", 00:10:20.258 "bdev_zone_block_create", 00:10:20.258 "bdev_delay_delete", 00:10:20.258 "bdev_delay_create", 00:10:20.258 "bdev_delay_update_latency", 00:10:20.258 "bdev_split_delete", 00:10:20.258 "bdev_split_create", 00:10:20.258 "bdev_error_inject_error", 00:10:20.258 "bdev_error_delete", 00:10:20.258 "bdev_error_create", 00:10:20.258 "bdev_raid_set_options", 00:10:20.258 "bdev_raid_remove_base_bdev", 00:10:20.258 "bdev_raid_add_base_bdev", 00:10:20.258 "bdev_raid_delete", 00:10:20.258 "bdev_raid_create", 00:10:20.258 "bdev_raid_get_bdevs", 00:10:20.258 "bdev_lvol_set_parent_bdev", 00:10:20.258 "bdev_lvol_set_parent", 00:10:20.258 "bdev_lvol_check_shallow_copy", 00:10:20.258 "bdev_lvol_start_shallow_copy", 00:10:20.258 "bdev_lvol_grow_lvstore", 00:10:20.258 "bdev_lvol_get_lvols", 00:10:20.258 "bdev_lvol_get_lvstores", 00:10:20.258 "bdev_lvol_delete", 00:10:20.258 "bdev_lvol_set_read_only", 00:10:20.258 "bdev_lvol_resize", 00:10:20.258 "bdev_lvol_decouple_parent", 00:10:20.258 "bdev_lvol_inflate", 00:10:20.258 "bdev_lvol_rename", 00:10:20.258 "bdev_lvol_clone_bdev", 00:10:20.258 "bdev_lvol_clone", 00:10:20.258 "bdev_lvol_snapshot", 00:10:20.258 "bdev_lvol_create", 00:10:20.258 "bdev_lvol_delete_lvstore", 00:10:20.258 "bdev_lvol_rename_lvstore", 00:10:20.258 "bdev_lvol_create_lvstore", 00:10:20.258 "bdev_passthru_delete", 00:10:20.258 "bdev_passthru_create", 00:10:20.258 "bdev_nvme_cuse_unregister", 00:10:20.258 "bdev_nvme_cuse_register", 00:10:20.258 "bdev_opal_new_user", 00:10:20.258 "bdev_opal_set_lock_state", 00:10:20.258 "bdev_opal_delete", 00:10:20.258 "bdev_opal_get_info", 00:10:20.258 "bdev_opal_create", 00:10:20.258 "bdev_nvme_opal_revert", 00:10:20.258 "bdev_nvme_opal_init", 00:10:20.258 "bdev_nvme_send_cmd", 00:10:20.258 "bdev_nvme_get_path_iostat", 00:10:20.258 "bdev_nvme_get_mdns_discovery_info", 00:10:20.258 "bdev_nvme_stop_mdns_discovery", 00:10:20.258 "bdev_nvme_start_mdns_discovery", 00:10:20.258 "bdev_nvme_set_multipath_policy", 00:10:20.258 "bdev_nvme_set_preferred_path", 00:10:20.258 "bdev_nvme_get_io_paths", 00:10:20.258 "bdev_nvme_remove_error_injection", 00:10:20.258 "bdev_nvme_add_error_injection", 00:10:20.258 "bdev_nvme_get_discovery_info", 00:10:20.258 "bdev_nvme_stop_discovery", 00:10:20.258 "bdev_nvme_start_discovery", 00:10:20.258 "bdev_nvme_get_controller_health_info", 00:10:20.258 "bdev_nvme_disable_controller", 00:10:20.258 "bdev_nvme_enable_controller", 00:10:20.258 "bdev_nvme_reset_controller", 00:10:20.258 "bdev_nvme_get_transport_statistics", 00:10:20.258 "bdev_nvme_apply_firmware", 00:10:20.258 "bdev_nvme_detach_controller", 00:10:20.258 "bdev_nvme_get_controllers", 00:10:20.258 "bdev_nvme_attach_controller", 00:10:20.258 "bdev_nvme_set_hotplug", 00:10:20.258 "bdev_nvme_set_options", 00:10:20.258 "bdev_null_resize", 00:10:20.258 "bdev_null_delete", 00:10:20.258 "bdev_null_create", 00:10:20.258 "bdev_malloc_delete", 00:10:20.258 "bdev_malloc_create" 00:10:20.258 ] 00:10:20.258 08:36:55 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.258 08:36:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:20.258 08:36:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 112633 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 112633 ']' 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 112633 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112633 00:10:20.258 killing process with pid 112633 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112633' 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 112633 00:10:20.258 08:36:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 112633 00:10:22.787 ************************************ 00:10:22.787 END TEST spdkcli_tcp 00:10:22.787 ************************************ 00:10:22.787 00:10:22.787 real 0m3.795s 00:10:22.787 user 0m6.822s 00:10:22.787 sys 0m0.571s 00:10:22.787 08:36:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.787 08:36:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.787 08:36:57 -- common/autotest_common.sh@1142 -- # return 0 00:10:22.787 08:36:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:22.787 08:36:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:22.787 08:36:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.787 08:36:57 -- common/autotest_common.sh@10 -- # set +x 00:10:22.787 ************************************ 00:10:22.787 START TEST dpdk_mem_utility 00:10:22.787 ************************************ 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:22.787 * Looking for test storage... 00:10:22.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:22.787 08:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:22.787 08:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112756 00:10:22.787 08:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112756 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 112756 ']' 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
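[annotation] The dpdk_mem_utility test that begins here pairs one RPC with one helper script: env_dpdk_get_mem_stats makes the target write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump (the heap, mempool, and memzone report below). A condensed sketch of the flow, with both invocations taken from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Ask the running target to dump DPDK memory stats to /tmp/spdk_mem_dump.txt.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize the dump: no flag prints the totals, -m 0 lists heap 0 element
    # by element, as in the output that follows.
    "$SPDK/scripts/dpdk_mem_info.py"
    "$SPDK/scripts/dpdk_mem_info.py" -m 0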
00:10:22.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.787 08:36:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:22.787 08:36:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:22.787 [2024-07-12 08:36:57.680570] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:10:22.787 [2024-07-12 08:36:57.681440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112756 ] 00:10:22.787 [2024-07-12 08:36:57.841304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.046 [2024-07-12 08:36:58.044632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.612 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.612 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:10:23.612 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:23.612 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:23.612 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.612 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:23.873 { 00:10:23.873 "filename": "/tmp/spdk_mem_dump.txt" 00:10:23.873 } 00:10:23.873 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.873 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:23.873 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:23.873 1 heaps totaling size 820.000000 MiB 00:10:23.873 size: 820.000000 MiB heap id: 0 00:10:23.873 end heaps---------- 00:10:23.873 8 mempools totaling size 598.116089 MiB 00:10:23.873 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:23.873 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:23.873 size: 84.521057 MiB name: bdev_io_112756 00:10:23.873 size: 51.011292 MiB name: evtpool_112756 00:10:23.873 size: 50.003479 MiB name: msgpool_112756 00:10:23.873 size: 21.763794 MiB name: PDU_Pool 00:10:23.873 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:23.873 size: 0.026123 MiB name: Session_Pool 00:10:23.873 end mempools------- 00:10:23.873 6 memzones totaling size 4.142822 MiB 00:10:23.873 size: 1.000366 MiB name: RG_ring_0_112756 00:10:23.873 size: 1.000366 MiB name: RG_ring_1_112756 00:10:23.873 size: 1.000366 MiB name: RG_ring_4_112756 00:10:23.873 size: 1.000366 MiB name: RG_ring_5_112756 00:10:23.873 size: 0.125366 MiB name: RG_ring_2_112756 00:10:23.873 size: 0.015991 MiB name: RG_ring_3_112756 00:10:23.873 end memzones------- 00:10:23.873 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:23.873 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:10:23.873 list of free elements. 
size: 18.469727 MiB 00:10:23.873 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:23.873 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:23.873 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:23.873 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:23.873 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:23.873 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:23.873 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:23.873 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:23.873 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:23.873 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:23.873 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:23.873 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:23.873 element at address: 0x20001b000000 with size: 0.561218 MiB 00:10:23.873 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:23.873 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:23.873 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:23.873 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:23.873 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:23.873 list of standard malloc elements. size: 199.265869 MiB 00:10:23.873 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:23.873 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:23.873 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:23.873 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:23.873 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:23.873 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:23.873 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:23.873 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:23.873 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:23.873 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:23.873 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:23.873 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:23.873 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:23.873 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:23.873 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:23.874 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0950c0 with size: 0.000244 MiB 
00:10:23.874 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:23.874 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:23.874 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:23.875 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:23.875 list of memzone associated elements. 
size: 602.264404 MiB 00:10:23.875 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:23.875 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:23.875 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:23.875 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:23.875 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:23.875 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112756_0 00:10:23.875 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:23.875 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112756_0 00:10:23.875 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:23.875 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112756_0 00:10:23.875 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:23.875 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:23.875 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:23.875 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:23.875 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:23.875 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112756 00:10:23.875 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:23.875 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112756 00:10:23.875 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:23.875 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112756 00:10:23.875 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:23.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:23.875 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:23.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:23.875 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:23.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:23.875 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:23.875 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:23.875 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:23.875 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112756 00:10:23.875 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:23.875 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112756 00:10:23.875 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:23.875 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112756 00:10:23.875 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:23.875 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112756 00:10:23.875 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:23.875 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112756 00:10:23.875 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:23.875 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:23.875 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:23.875 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:23.875 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:23.875 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:23.875 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:23.875 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_112756 00:10:23.875 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:23.875 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:23.875 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:23.875 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:23.875 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:23.875 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112756 00:10:23.875 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:23.875 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:23.875 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:23.875 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112756 00:10:23.875 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:23.875 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112756 00:10:23.875 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:23.875 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:23.875 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:23.875 08:36:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112756 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 112756 ']' 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 112756 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112756 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.875 killing process with pid 112756 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112756' 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 112756 00:10:23.875 08:36:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 112756 00:10:26.407 ************************************ 00:10:26.407 END TEST dpdk_mem_utility 00:10:26.407 ************************************ 00:10:26.407 00:10:26.407 real 0m3.443s 00:10:26.407 user 0m3.488s 00:10:26.407 sys 0m0.464s 00:10:26.407 08:37:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.407 08:37:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:26.407 08:37:01 -- common/autotest_common.sh@1142 -- # return 0 00:10:26.407 08:37:01 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:26.407 08:37:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.407 08:37:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.407 08:37:01 -- common/autotest_common.sh@10 -- # set +x 00:10:26.407 ************************************ 00:10:26.407 START TEST event 00:10:26.407 ************************************ 00:10:26.407 08:37:01 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:26.407 * Looking for test storage... 
00:10:26.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:26.407 08:37:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:26.407 08:37:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:26.407 08:37:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:26.407 08:37:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:26.407 08:37:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.407 08:37:01 event -- common/autotest_common.sh@10 -- # set +x 00:10:26.407 ************************************ 00:10:26.407 START TEST event_perf 00:10:26.407 ************************************ 00:10:26.407 08:37:01 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:26.407 Running I/O for 1 seconds...[2024-07-12 08:37:01.177291] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:10:26.407 [2024-07-12 08:37:01.177571] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112870 ] 00:10:26.407 [2024-07-12 08:37:01.358923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.407 [2024-07-12 08:37:01.567753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.407 [2024-07-12 08:37:01.567819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.407 Running I/O for 1 seconds...[2024-07-12 08:37:01.567930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.407 [2024-07-12 08:37:01.567944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.781 00:10:27.781 lcore 0: 194079 00:10:27.781 lcore 1: 194079 00:10:27.781 lcore 2: 194078 00:10:27.781 lcore 3: 194079 00:10:27.781 done. 00:10:27.781 ************************************ 00:10:27.781 END TEST event_perf 00:10:27.781 ************************************ 00:10:27.781 00:10:27.781 real 0m1.806s 00:10:27.781 user 0m4.552s 00:10:27.781 sys 0m0.144s 00:10:27.781 08:37:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.781 08:37:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:28.038 08:37:02 event -- common/autotest_common.sh@1142 -- # return 0 00:10:28.038 08:37:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:28.038 08:37:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:28.038 08:37:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.038 08:37:02 event -- common/autotest_common.sh@10 -- # set +x 00:10:28.038 ************************************ 00:10:28.038 START TEST event_reactor 00:10:28.038 ************************************ 00:10:28.038 08:37:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:28.038 [2024-07-12 08:37:03.019065] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:10:28.038 [2024-07-12 08:37:03.019425] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112930 ] 00:10:28.038 [2024-07-12 08:37:03.178945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.295 [2024-07-12 08:37:03.388663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.670 test_start 00:10:29.670 oneshot 00:10:29.670 tick 100 00:10:29.670 tick 100 00:10:29.670 tick 250 00:10:29.670 tick 100 00:10:29.670 tick 100 00:10:29.670 tick 100 00:10:29.670 tick 250 00:10:29.670 tick 500 00:10:29.670 tick 100 00:10:29.670 tick 100 00:10:29.670 tick 250 00:10:29.670 tick 100 00:10:29.670 tick 100 00:10:29.670 test_end 00:10:29.670 ************************************ 00:10:29.670 END TEST event_reactor 00:10:29.670 ************************************ 00:10:29.670 00:10:29.670 real 0m1.782s 00:10:29.670 user 0m1.556s 00:10:29.670 sys 0m0.125s 00:10:29.670 08:37:04 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.670 08:37:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:29.670 08:37:04 event -- common/autotest_common.sh@1142 -- # return 0 00:10:29.670 08:37:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:29.670 08:37:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:29.670 08:37:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.670 08:37:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:29.670 ************************************ 00:10:29.670 START TEST event_reactor_perf 00:10:29.670 ************************************ 00:10:29.670 08:37:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:29.670 [2024-07-12 08:37:04.860811] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:10:29.670 [2024-07-12 08:37:04.861229] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112975 ] 00:10:29.929 [2024-07-12 08:37:05.030051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.187 [2024-07-12 08:37:05.237116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.563 test_start 00:10:31.563 test_end 00:10:31.563 Performance: 341451 events per second 00:10:31.563 ************************************ 00:10:31.563 END TEST event_reactor_perf 00:10:31.563 ************************************ 00:10:31.563 00:10:31.563 real 0m1.782s 00:10:31.563 user 0m1.565s 00:10:31.563 sys 0m0.116s 00:10:31.563 08:37:06 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.563 08:37:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 08:37:06 event -- common/autotest_common.sh@1142 -- # return 0 00:10:31.563 08:37:06 event -- event/event.sh@49 -- # uname -s 00:10:31.563 08:37:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:31.563 08:37:06 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:31.563 08:37:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.563 08:37:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.563 08:37:06 event -- common/autotest_common.sh@10 -- # set +x 00:10:31.563 ************************************ 00:10:31.563 START TEST event_scheduler 00:10:31.563 ************************************ 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:31.563 * Looking for test storage... 00:10:31.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:31.563 08:37:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:31.563 08:37:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=113046 00:10:31.563 08:37:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:31.563 08:37:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:31.563 08:37:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 113046 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 113046 ']' 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
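Note: the scheduler app above is launched with --wait-for-rpc, and the waitforlisten step that follows simply polls the target's RPC UNIX socket until the app answers. Reduced to a stand-alone sketch (hypothetical function name, not the actual autotest_common.sh implementation; assumes the stock rpc.py client and its rpc_get_methods call):

# Sketch only: poll until the app at $pid serves RPCs on $sock, or give up.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &>/dev/null && return 0               # socket is accepting RPCs
        sleep 0.1
    done
    return 1
}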
00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.563 08:37:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:31.822 [2024-07-12 08:37:06.820122] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:10:31.822 [2024-07-12 08:37:06.821255] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113046 ] 00:10:31.822 [2024-07-12 08:37:07.012779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.080 [2024-07-12 08:37:07.265156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.338 [2024-07-12 08:37:07.275973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.338 [2024-07-12 08:37:07.276118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.338 [2024-07-12 08:37:07.276126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:32.597 08:37:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:32.597 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:32.597 POWER: Cannot set governor of lcore 0 to userspace 00:10:32.597 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:32.597 POWER: Cannot set governor of lcore 0 to performance 00:10:32.597 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:32.597 POWER: Cannot set governor of lcore 0 to userspace 00:10:32.597 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:32.597 POWER: Cannot set governor of lcore 0 to userspace 00:10:32.597 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:32.597 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:32.597 POWER: Unable to set Power Management Environment for lcore 0 00:10:32.597 [2024-07-12 08:37:07.743011] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:32.597 [2024-07-12 08:37:07.743213] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:32.597 [2024-07-12 08:37:07.743395] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:32.597 [2024-07-12 08:37:07.743542] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:32.597 [2024-07-12 08:37:07.743685] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:32.597 [2024-07-12 08:37:07.743809] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.597 08:37:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:32.597 08:37:07 event.event_scheduler -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.597 08:37:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:32.856 [2024-07-12 08:37:08.041174] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:32.856 08:37:08 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.856 08:37:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:32.856 08:37:08 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.856 08:37:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.856 08:37:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 ************************************ 00:10:33.115 START TEST scheduler_create_thread 00:10:33.115 ************************************ 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 2 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 3 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 4 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 5 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 6 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 7 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 8 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 9 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 10 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # 
set +x 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.115 08:37:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:34.051 08:37:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.051 08:37:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:34.051 08:37:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:34.051 08:37:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.051 08:37:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:35.049 08:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.049 ************************************ 00:10:35.049 END TEST scheduler_create_thread 00:10:35.049 ************************************ 00:10:35.049 00:10:35.049 real 0m2.151s 00:10:35.049 user 0m0.007s 00:10:35.049 sys 0m0.003s 00:10:35.049 08:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.049 08:37:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:10:35.307 08:37:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:35.307 08:37:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 113046 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 113046 ']' 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 113046 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113046 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:35.307 killing process with pid 113046 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113046' 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 113046 00:10:35.307 08:37:10 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 113046 00:10:35.565 [2024-07-12 08:37:10.683982] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
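Note: the scheduler_create_thread run above is driven entirely over RPC: the dynamic scheduler is selected, deferred init finishes, then the scheduler_plugin calls create threads with a core mask (-m), an active percentage (-a), and a name (-n). A hand-driven equivalent, as a sketch only (assumes the scheduler app is still listening on /var/tmp/spdk.sock and that the scheduler_plugin module is importable by rpc.py, e.g. via PYTHONPATH):

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
$RPC framework_set_scheduler dynamic    # pick the dynamic scheduler
$RPC framework_start_init               # finish init deferred by --wait-for-rpc
# a thread pinned to core 0, busy 100% of its poll time:
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# an unpinned thread; the call returns a thread id for the other plugin RPCs:
id=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$RPC --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
$RPC --plugin scheduler_plugin scheduler_thread_delete "$id"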
00:10:36.940 00:10:36.940 real 0m5.138s 00:10:36.940 user 0m8.255s 00:10:36.940 sys 0m0.435s 00:10:36.940 08:37:11 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.940 ************************************ 00:10:36.940 08:37:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:36.940 END TEST event_scheduler 00:10:36.940 ************************************ 00:10:36.940 08:37:11 event -- common/autotest_common.sh@1142 -- # return 0 00:10:36.940 08:37:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:36.940 08:37:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:36.940 08:37:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.940 08:37:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.940 08:37:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:36.940 ************************************ 00:10:36.940 START TEST app_repeat 00:10:36.940 ************************************ 00:10:36.940 08:37:11 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:36.940 08:37:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.940 08:37:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:10:36.940 08:37:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:36.940 08:37:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:10:36.940 08:37:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=113187 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 113187' 00:10:36.941 Process app_repeat pid: 113187 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:36.941 spdk_app_start Round 0 00:10:36.941 08:37:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113187 /var/tmp/spdk-nbd.sock 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113187 ']' 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:36.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.941 08:37:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:36.941 [2024-07-12 08:37:11.904022] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:10:36.941 [2024-07-12 08:37:11.904427] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113187 ] 00:10:36.941 [2024-07-12 08:37:12.076910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:37.199 [2024-07-12 08:37:12.288893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.199 [2024-07-12 08:37:12.288895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.133 08:37:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.133 08:37:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:38.133 08:37:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:38.133 Malloc0 00:10:38.133 08:37:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:38.391 Malloc1 00:10:38.391 08:37:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.391 08:37:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:38.649 /dev/nbd0 00:10:38.649 08:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.649 08:37:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:38.649 08:37:13 
event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:38.649 1+0 records in 00:10:38.649 1+0 records out 00:10:38.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443306 s, 9.2 MB/s 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:38.649 08:37:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:38.649 08:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.649 08:37:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.649 08:37:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:38.908 /dev/nbd1 00:10:38.908 08:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.908 08:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:38.908 1+0 records in 00:10:38.908 1+0 records out 00:10:38.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530139 s, 7.7 MB/s 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:38.908 08:37:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:39.166 08:37:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:39.166 08:37:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:39.166 08:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.166 08:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.166 08:37:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.166 08:37:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.166 08:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:39.424 { 00:10:39.424 "nbd_device": "/dev/nbd0", 00:10:39.424 "bdev_name": "Malloc0" 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "nbd_device": "/dev/nbd1", 00:10:39.424 "bdev_name": "Malloc1" 00:10:39.424 } 00:10:39.424 ]' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:39.424 { 00:10:39.424 "nbd_device": "/dev/nbd0", 00:10:39.424 "bdev_name": "Malloc0" 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "nbd_device": "/dev/nbd1", 00:10:39.424 "bdev_name": "Malloc1" 00:10:39.424 } 00:10:39.424 ]' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:39.424 /dev/nbd1' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:39.424 /dev/nbd1' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:39.424 256+0 records in 00:10:39.424 256+0 records out 00:10:39.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00950437 s, 110 MB/s 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:39.424 256+0 records in 00:10:39.424 256+0 records out 00:10:39.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242353 s, 43.3 MB/s 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:39.424 256+0 records in 00:10:39.424 256+0 records out 00:10:39.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265252 s, 39.5 MB/s 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:39.424 08:37:14 event.app_repeat -- 
bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:39.424 08:37:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:39.425 08:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.425 08:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.683 08:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:39.943 08:37:15 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:40.201 08:37:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.201 08:37:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:40.459 08:37:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:40.459 08:37:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:40.718 08:37:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:42.094 [2024-07-12 08:37:17.010250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.094 [2024-07-12 08:37:17.191327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.094 [2024-07-12 08:37:17.191334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.353 [2024-07-12 08:37:17.372576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:42.353 [2024-07-12 08:37:17.372843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:43.768 spdk_app_start Round 1 00:10:43.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:43.768 08:37:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:43.768 08:37:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:43.768 08:37:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113187 /var/tmp/spdk-nbd.sock 00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113187 ']' 00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
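Note: every app_repeat round, including Round 1 starting here, repeats the same data check: create two 64 MB malloc bdevs with 4 KiB blocks, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each, read it back with cmp, then tear everything down. Condensed into a sketch (assumes the app_repeat instance is serving /var/tmp/spdk-nbd.sock; the scratch path is arbitrary, the test itself uses test/event/nbdrandtest):

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$RPC bdev_malloc_create 64 4096           # 64 MB bdev, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096           # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # 1 MiB test pattern
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"  # verify what the bdev hands back
done
rm /tmp/nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1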
00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.768 08:37:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:44.027 08:37:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.027 08:37:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:44.027 08:37:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.286 Malloc0 00:10:44.286 08:37:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.545 Malloc1 00:10:44.545 08:37:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.545 08:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:44.804 /dev/nbd0 00:10:44.804 08:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:44.804 08:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:44.804 1+0 records in 00:10:44.804 1+0 records out 00:10:44.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348017 s, 11.8 MB/s 
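Note: the waitfornbd / waitfornbd_exit gates visible in the trace just watch /proc/partitions for the device name to appear (or disappear) before dd touches it. In outline (a sketch of the pattern, not the exact autotest_common.sh code):

# Sketch: wait until the kernel lists the device in /proc/partitions.
waitfornbd() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}
waitfornbd nbd1    # e.g. before the /dev/nbd1 dd in the next step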
00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.804 08:37:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:44.804 08:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.804 08:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:44.804 08:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:45.372 /dev/nbd1 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:45.372 1+0 records in 00:10:45.372 1+0 records out 00:10:45.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279381 s, 14.7 MB/s 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.372 08:37:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:45.372 { 00:10:45.372 "nbd_device": "/dev/nbd0", 00:10:45.372 "bdev_name": "Malloc0" 00:10:45.372 }, 00:10:45.372 { 00:10:45.372 "nbd_device": "/dev/nbd1", 00:10:45.372 "bdev_name": "Malloc1" 00:10:45.372 } 00:10:45.372 ]' 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:10:45.372 { 00:10:45.372 "nbd_device": "/dev/nbd0", 00:10:45.372 "bdev_name": "Malloc0" 00:10:45.372 }, 00:10:45.372 { 00:10:45.372 "nbd_device": "/dev/nbd1", 00:10:45.372 "bdev_name": "Malloc1" 00:10:45.372 } 00:10:45.372 ]' 00:10:45.372 08:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:45.630 /dev/nbd1' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:45.630 /dev/nbd1' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:45.630 256+0 records in 00:10:45.630 256+0 records out 00:10:45.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564357 s, 186 MB/s 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:45.630 256+0 records in 00:10:45.630 256+0 records out 00:10:45.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313154 s, 33.5 MB/s 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:45.630 256+0 records in 00:10:45.630 256+0 records out 00:10:45.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276456 s, 37.9 MB/s 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.630 08:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.889 08:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:46.147 08:37:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:46.147 08:37:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:46.147 08:37:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:46.147 08:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.147 08:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.148 08:37:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:46.406 08:37:21 event.app_repeat -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:46.406 08:37:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:46.406 08:37:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:46.974 08:37:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:48.351 [2024-07-12 08:37:23.155494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.351 [2024-07-12 08:37:23.359408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.351 [2024-07-12 08:37:23.359414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.352 [2024-07-12 08:37:23.541903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:48.352 [2024-07-12 08:37:23.542258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:50.277 spdk_app_start Round 2 00:10:50.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:50.277 08:37:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:50.277 08:37:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:50.277 08:37:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 113187 /var/tmp/spdk-nbd.sock 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113187 ']' 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
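[Editor's note: the Round 1 write/verify cycle traced above condenses to the sketch below. The device list, the 1 MiB size, and the dd/cmp flags are taken straight from the trace; the function name, the /tmp path, and the compact error handling are ours for illustration.]

verify_nbd_data() {
  local nbd_list=(/dev/nbd0 /dev/nbd1)
  local tmp_file=/tmp/nbdrandtest   # the suite keeps its copy under test/event/
  # seed 1 MiB of random data once
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  # write the same bytes to every exported device, bypassing the page cache
  local dev
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  # read each device back and compare byte-for-byte
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || return 1
  done
  rm "$tmp_file"
}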
00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.277 08:37:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:50.277 08:37:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.536 Malloc0 00:10:50.536 08:37:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.794 Malloc1 00:10:50.794 08:37:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.794 08:37:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.795 08:37:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:51.100 /dev/nbd0 00:10:51.100 08:37:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.100 08:37:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.100 1+0 records in 00:10:51.100 1+0 records out 00:10:51.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530119 s, 7.7 MB/s 
00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:51.100 08:37:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:51.100 08:37:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.100 08:37:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.100 08:37:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:51.358 /dev/nbd1 00:10:51.358 08:37:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:51.358 08:37:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:51.358 08:37:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:51.358 08:37:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:51.358 08:37:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:51.358 08:37:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:51.358 08:37:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.359 1+0 records in 00:10:51.359 1+0 records out 00:10:51.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395006 s, 10.4 MB/s 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:51.359 08:37:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:51.359 08:37:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.359 08:37:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.359 08:37:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:51.359 08:37:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.359 08:37:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:51.618 { 00:10:51.618 "nbd_device": "/dev/nbd0", 00:10:51.618 "bdev_name": "Malloc0" 00:10:51.618 }, 00:10:51.618 { 00:10:51.618 "nbd_device": "/dev/nbd1", 00:10:51.618 "bdev_name": "Malloc1" 00:10:51.618 } 00:10:51.618 ]' 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:10:51.618 { 00:10:51.618 "nbd_device": "/dev/nbd0", 00:10:51.618 "bdev_name": "Malloc0" 00:10:51.618 }, 00:10:51.618 { 00:10:51.618 "nbd_device": "/dev/nbd1", 00:10:51.618 "bdev_name": "Malloc1" 00:10:51.618 } 00:10:51.618 ]' 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:51.618 /dev/nbd1' 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:51.618 /dev/nbd1' 00:10:51.618 08:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:51.619 256+0 records in 00:10:51.619 256+0 records out 00:10:51.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104177 s, 101 MB/s 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:51.619 256+0 records in 00:10:51.619 256+0 records out 00:10:51.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242967 s, 43.2 MB/s 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:51.619 256+0 records in 00:10:51.619 256+0 records out 00:10:51.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310258 s, 33.8 MB/s 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.619 08:37:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.878 08:37:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.137 08:37:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.396 08:37:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.655 08:37:27 event.app_repeat -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:52.655 08:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:52.655 08:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:52.913 08:37:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:52.913 08:37:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:53.172 08:37:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.549 [2024-07-12 08:37:29.377113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.549 [2024-07-12 08:37:29.573331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.549 [2024-07-12 08:37:29.573334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.807 [2024-07-12 08:37:29.753087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.807 [2024-07-12 08:37:29.753229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:56.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:56.183 08:37:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 113187 /var/tmp/spdk-nbd.sock 00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 113187 ']' 00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
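[Editor's note: teardown in each round is the mirror image — nbd_stop_disk is issued over the RPC socket, then the helper polls /proc/partitions until the kernel device disappears; the sleep 0.1 retry is visible for nbd1 in the trace above. A condensed reading of that loop, not the verbatim helper:]

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    # done as soon as the name drops out of the kernel's partition table
    grep -q -w "$nbd_name" /proc/partitions || return 0
    sleep 0.1
  done
  return 1   # device never went away
}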
00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.183 08:37:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:56.442 08:37:31 event.app_repeat -- event/event.sh@39 -- # killprocess 113187 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 113187 ']' 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 113187 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113187 00:10:56.442 killing process with pid 113187 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113187' 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@967 -- # kill 113187 00:10:56.442 08:37:31 event.app_repeat -- common/autotest_common.sh@972 -- # wait 113187 00:10:57.378 spdk_app_start is called in Round 0. 00:10:57.378 Shutdown signal received, stop current app iteration 00:10:57.378 Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 reinitialization... 00:10:57.378 spdk_app_start is called in Round 1. 00:10:57.378 Shutdown signal received, stop current app iteration 00:10:57.378 Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 reinitialization... 00:10:57.378 spdk_app_start is called in Round 2. 00:10:57.378 Shutdown signal received, stop current app iteration 00:10:57.378 Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 reinitialization... 00:10:57.378 spdk_app_start is called in Round 3. 
00:10:57.378 Shutdown signal received, stop current app iteration 00:10:57.637 ************************************ 00:10:57.637 END TEST app_repeat 00:10:57.637 ************************************ 00:10:57.637 08:37:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:57.637 08:37:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:57.637 00:10:57.637 real 0m20.727s 00:10:57.637 user 0m44.478s 00:10:57.637 sys 0m2.760s 00:10:57.637 08:37:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.637 08:37:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 08:37:32 event -- common/autotest_common.sh@1142 -- # return 0 00:10:57.637 08:37:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:57.637 08:37:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:57.637 08:37:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:57.637 08:37:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.637 08:37:32 event -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 ************************************ 00:10:57.637 START TEST cpu_locks 00:10:57.637 ************************************ 00:10:57.637 08:37:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:57.637 * Looking for test storage... 00:10:57.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:57.637 08:37:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:57.637 08:37:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:57.637 08:37:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:57.637 08:37:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:57.637 08:37:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:57.637 08:37:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.637 08:37:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 ************************************ 00:10:57.637 START TEST default_locks 00:10:57.637 ************************************ 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113768 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 113768 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113768 ']' 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
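[Editor's note: the killprocess sequence traced a few lines up for the app_repeat pid 113187 reduces to the sketch below. The liveness check, the guard against killing a sudo wrapper, and the final wait are all visible in the trace; the compact early returns are ours.]

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                  # nothing to kill
  kill -0 "$pid" || return 1                 # is it still alive?
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the trace
    [ "$name" = sudo ] && return 1           # never SIGTERM the sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                # reap it so the test sees the exit
}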
00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.637 08:37:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 [2024-07-12 08:37:32.792044] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:10:57.637 [2024-07-12 08:37:32.792296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113768 ] 00:10:57.896 [2024-07-12 08:37:32.960680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.154 [2024-07-12 08:37:33.165427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.720 08:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.720 08:37:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:10:58.720 08:37:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 113768 00:10:58.720 08:37:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 113768 00:10:58.720 08:37:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 113768 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 113768 ']' 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 113768 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113768 00:10:58.978 killing process with pid 113768 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113768' 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 113768 00:10:58.978 08:37:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 113768 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113768 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 113768 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 113768 00:11:01.586 08:37:36 
event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113768 ']' 00:11:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.586 ERROR: process (pid: 113768) is no longer running 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (113768) - No such process 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:01.586 00:11:01.586 real 0m3.517s 00:11:01.586 user 0m3.477s 00:11:01.586 sys 0m0.575s 00:11:01.586 ************************************ 00:11:01.586 END TEST default_locks 00:11:01.586 ************************************ 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.586 08:37:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 08:37:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:01.586 08:37:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:01.586 08:37:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:01.586 08:37:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.586 08:37:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 ************************************ 00:11:01.586 START TEST default_locks_via_rpc 00:11:01.586 ************************************ 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113845 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 113845 00:11:01.586 
08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 113845 ']' 00:11:01.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.586 08:37:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.586 [2024-07-12 08:37:36.362377] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:01.586 [2024-07-12 08:37:36.362601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113845 ] 00:11:01.586 [2024-07-12 08:37:36.532590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.586 [2024-07-12 08:37:36.739221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 113845 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 113845 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 113845 00:11:02.519 08:37:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 113845 ']' 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 113845 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:02.519 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113845 00:11:02.777 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:02.777 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:02.777 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113845' 00:11:02.777 killing process with pid 113845 00:11:02.777 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 113845 00:11:02.777 08:37:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 113845 00:11:04.685 00:11:04.685 real 0m3.521s 00:11:04.685 user 0m3.573s 00:11:04.685 sys 0m0.576s 00:11:04.685 08:37:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.685 ************************************ 00:11:04.685 END TEST default_locks_via_rpc 00:11:04.685 ************************************ 00:11:04.685 08:37:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 08:37:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:04.685 08:37:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:04.685 08:37:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.685 08:37:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.685 08:37:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 ************************************ 00:11:04.685 START TEST non_locking_app_on_locked_coremask 00:11:04.685 ************************************ 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113924 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 113924 /var/tmp/spdk.sock 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113924 ']' 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
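[Editor's note: both locking tests above decide whether a core lock is held the same way — lslocks lists the target's file locks and grep looks for spdk_cpu_lock, the lock files the no_locks glob shows under /var/tmp. The via-rpc variant toggles the same locks at runtime. A minimal sketch, assuming rpc_cmd in the trace wraps scripts/rpc.py as usual:]

locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

scripts/rpc.py framework_disable_cpumask_locks   # lock files released while running
scripts/rpc.py framework_enable_cpumask_locks    # and re-taken on demand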
00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.685 08:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:04.944 [2024-07-12 08:37:39.934305] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:04.944 [2024-07-12 08:37:39.934524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113924 ] 00:11:04.944 [2024-07-12 08:37:40.104485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.203 [2024-07-12 08:37:40.303953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113945 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 113945 /var/tmp/spdk2.sock 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113945 ']' 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:06.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.139 08:37:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.139 [2024-07-12 08:37:41.124565] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:06.139 [2024-07-12 08:37:41.124783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113945 ] 00:11:06.139 [2024-07-12 08:37:41.294590] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
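[Editor's note: the launch pair traced above, condensed — binary path, core mask, and socket are as in the log; backgrounding with & is ours for illustration.]

# First target claims core 0's lock file on startup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# Second target shares the core mask but opts out of lock checking and
# listens on its own RPC socket, so both can be driven independently.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &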
00:11:06.139 [2024-07-12 08:37:41.294680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.706 [2024-07-12 08:37:41.715900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.237 08:37:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.237 08:37:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:09.237 08:37:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 113924 00:11:09.237 08:37:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 113924 00:11:09.237 08:37:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 113924 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113924 ']' 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 113924 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113924 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113924' 00:11:09.237 killing process with pid 113924 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 113924 00:11:09.237 08:37:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 113924 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 113945 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113945 ']' 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 113945 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113945 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113945' 00:11:13.426 killing process with pid 113945 00:11:13.426 
08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 113945 00:11:13.426 08:37:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 113945 00:11:15.327 00:11:15.327 real 0m10.659s 00:11:15.327 user 0m11.081s 00:11:15.327 sys 0m1.282s 00:11:15.327 08:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.327 ************************************ 00:11:15.327 END TEST non_locking_app_on_locked_coremask 00:11:15.327 ************************************ 00:11:15.327 08:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.584 08:37:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:15.584 08:37:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:15.584 08:37:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:15.584 08:37:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.584 08:37:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.584 ************************************ 00:11:15.584 START TEST locking_app_on_unlocked_coremask 00:11:15.584 ************************************ 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=114112 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 114112 /var/tmp/spdk.sock 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114112 ']' 00:11:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.584 08:37:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.584 [2024-07-12 08:37:50.634821] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:15.584 [2024-07-12 08:37:50.635045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114112 ] 00:11:15.842 [2024-07-12 08:37:50.804300] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:15.842 [2024-07-12 08:37:50.804371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.842 [2024-07-12 08:37:51.007693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=114139 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 114139 /var/tmp/spdk2.sock 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114139 ']' 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.774 08:37:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:16.774 [2024-07-12 08:37:51.810429] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:16.774 [2024-07-12 08:37:51.810612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114139 ] 00:11:16.774 [2024-07-12 08:37:51.962353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.339 [2024-07-12 08:37:52.381653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.866 08:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.866 08:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:19.866 08:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 114139 00:11:19.866 08:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114139 00:11:19.866 08:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 114112 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114112 ']' 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 114112 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114112 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.124 killing process with pid 114112 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114112' 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 114112 00:11:20.124 08:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 114112 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 114139 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114139 ']' 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 114139 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114139 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114139' 00:11:24.308 killing process with pid 114139 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 114139 00:11:24.308 08:37:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 114139 00:11:26.858 00:11:26.858 real 0m11.291s 00:11:26.858 user 0m11.871s 00:11:26.858 sys 0m1.307s 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.858 ************************************ 00:11:26.858 END TEST locking_app_on_unlocked_coremask 00:11:26.858 ************************************ 00:11:26.858 08:38:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:26.858 08:38:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:26.858 08:38:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:26.858 08:38:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.858 08:38:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:26.858 ************************************ 00:11:26.858 START TEST locking_app_on_locked_coremask 00:11:26.858 ************************************ 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=114314 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 114314 /var/tmp/spdk.sock 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114314 ']' 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.858 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.859 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.859 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.859 08:38:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.859 [2024-07-12 08:38:01.979278] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:26.859 [2024-07-12 08:38:01.979520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114314 ] 00:11:27.116 [2024-07-12 08:38:02.149307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.375 [2024-07-12 08:38:02.396884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=114350 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 114350 /var/tmp/spdk2.sock 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114350 /var/tmp/spdk2.sock 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:28.311 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114350 /var/tmp/spdk2.sock 00:11:28.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114350 ']' 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.312 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:28.312 [2024-07-12 08:38:03.307017] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:28.312 [2024-07-12 08:38:03.307337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114350 ] 00:11:28.312 [2024-07-12 08:38:03.484867] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 114314 has claimed it. 00:11:28.312 [2024-07-12 08:38:03.484981] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:28.878 ERROR: process (pid: 114350) is no longer running 00:11:28.878 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114350) - No such process 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 114314 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114314 00:11:28.878 08:38:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 114314 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114314 ']' 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114314 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114314 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114314' 00:11:29.136 killing process with pid 114314 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114314 00:11:29.136 08:38:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114314 00:11:31.666 00:11:31.666 real 0m4.747s 00:11:31.666 user 0m4.953s 00:11:31.666 sys 0m0.837s 00:11:31.666 08:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:31.666 ************************************ 
00:11:31.666 END TEST locking_app_on_locked_coremask 00:11:31.666 ************************************ 00:11:31.666 08:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.666 08:38:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:31.666 08:38:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:31.666 08:38:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:31.666 08:38:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.666 08:38:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:31.666 ************************************ 00:11:31.666 START TEST locking_overlapped_coremask 00:11:31.666 ************************************ 00:11:31.666 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:11:31.666 08:38:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=114419 00:11:31.666 08:38:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:31.666 08:38:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 114419 /var/tmp/spdk.sock 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114419 ']' 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.667 08:38:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.667 [2024-07-12 08:38:06.772873] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:31.667 [2024-07-12 08:38:06.773068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114419 ] 00:11:31.925 [2024-07-12 08:38:06.946295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.184 [2024-07-12 08:38:07.197920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.184 [2024-07-12 08:38:07.198050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.184 [2024-07-12 08:38:07.198063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=114449 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 114449 /var/tmp/spdk2.sock 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114449 /var/tmp/spdk2.sock 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114449 /var/tmp/spdk2.sock 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114449 ']' 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:33.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.116 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:33.116 [2024-07-12 08:38:08.138728] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:33.116 [2024-07-12 08:38:08.138944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114449 ] 00:11:33.374 [2024-07-12 08:38:08.340433] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114419 has claimed it. 00:11:33.374 [2024-07-12 08:38:08.340825] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:33.941 ERROR: process (pid: 114449) is no longer running 00:11:33.941 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114449) - No such process 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 114419 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 114419 ']' 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 114419 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114419 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.941 killing process with pid 114419 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114419' 00:11:33.941 08:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 114419 00:11:33.941 08:38:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 114419 00:11:36.471 00:11:36.471 real 0m4.506s 00:11:36.471 user 0m11.726s 00:11:36.471 sys 0m0.741s 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.471 ************************************ 00:11:36.471 END TEST locking_overlapped_coremask 00:11:36.471 ************************************ 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:36.471 08:38:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:36.471 08:38:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:36.471 08:38:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:36.471 08:38:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.471 08:38:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:36.471 ************************************ 00:11:36.471 START TEST locking_overlapped_coremask_via_rpc 00:11:36.471 ************************************ 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114518 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 114518 /var/tmp/spdk.sock 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114518 ']' 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.471 08:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.471 [2024-07-12 08:38:11.339473] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:36.471 [2024-07-12 08:38:11.339689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114518 ] 00:11:36.471 [2024-07-12 08:38:11.521261] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:36.471 [2024-07-12 08:38:11.521352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:36.729 [2024-07-12 08:38:11.760589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.729 [2024-07-12 08:38:11.760777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.729 [2024-07-12 08:38:11.760777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=114541 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 114541 /var/tmp/spdk2.sock 00:11:37.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114541 ']' 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.662 08:38:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:37.662 [2024-07-12 08:38:12.704378] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:37.662 [2024-07-12 08:38:12.704612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114541 ] 00:11:37.920 [2024-07-12 08:38:12.896309] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:37.920 [2024-07-12 08:38:12.908864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:38.484 [2024-07-12 08:38:13.422931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.484 [2024-07-12 08:38:13.436579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.484 [2024-07-12 08:38:13.436581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.473 [2024-07-12 08:38:15.428527] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114518 has claimed it. 
00:11:40.473 request: 00:11:40.473 { 00:11:40.473 "method": "framework_enable_cpumask_locks", 00:11:40.473 "req_id": 1 00:11:40.473 } 00:11:40.473 Got JSON-RPC error response 00:11:40.473 response: 00:11:40.473 { 00:11:40.473 "code": -32603, 00:11:40.473 "message": "Failed to claim CPU core: 2" 00:11:40.473 } 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 114518 /var/tmp/spdk.sock 00:11:40.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114518 ']' 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.473 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:40.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 114541 /var/tmp/spdk2.sock 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114541 ']' 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.732 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:40.990 00:11:40.990 real 0m4.677s 00:11:40.990 user 0m1.560s 00:11:40.990 sys 0m0.170s 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.990 08:38:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.990 ************************************ 00:11:40.990 END TEST locking_overlapped_coremask_via_rpc 00:11:40.990 ************************************ 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:40.990 08:38:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:40.990 08:38:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114518 ]] 00:11:40.990 08:38:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114518 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114518 ']' 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114518 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114518 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.990 killing process with pid 114518 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114518' 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114518 00:11:40.990 08:38:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114518 00:11:43.522 08:38:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114541 ]] 00:11:43.522 08:38:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114541 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114541 ']' 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114541 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114541 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:43.522 killing process with pid 114541 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114541' 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114541 00:11:43.522 08:38:18 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114541 00:11:45.423 08:38:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:45.423 08:38:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:45.423 08:38:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114518 ]] 00:11:45.423 08:38:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114518 00:11:45.423 08:38:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114518 ']' 00:11:45.423 08:38:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114518 00:11:45.423 Process with pid 114518 is not found 00:11:45.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114518) - No such process 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114518 is not found' 00:11:45.424 08:38:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114541 ]] 00:11:45.424 08:38:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114541 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114541 ']' 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114541 00:11:45.424 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114541) - No such process 00:11:45.424 Process with pid 114541 is not found 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114541 is not found' 00:11:45.424 08:38:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:45.424 ************************************ 00:11:45.424 END TEST cpu_locks 00:11:45.424 ************************************ 00:11:45.424 00:11:45.424 real 0m47.871s 00:11:45.424 user 1m22.445s 00:11:45.424 sys 0m6.773s 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.424 08:38:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:45.424 08:38:20 event -- common/autotest_common.sh@1142 -- # return 0 00:11:45.424 ************************************ 00:11:45.424 END TEST event 00:11:45.424 ************************************ 00:11:45.424 00:11:45.424 real 1m19.508s 00:11:45.424 user 2m23.068s 00:11:45.424 sys 0m10.497s 00:11:45.424 08:38:20 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.424 08:38:20 event -- common/autotest_common.sh@10 -- # set +x 00:11:45.424 08:38:20 -- common/autotest_common.sh@1142 -- # return 0 00:11:45.424 08:38:20 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:45.424 08:38:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:45.424 08:38:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.424 08:38:20 -- common/autotest_common.sh@10 -- # set +x 00:11:45.424 
************************************ 00:11:45.424 START TEST thread 00:11:45.424 ************************************ 00:11:45.424 08:38:20 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:45.682 * Looking for test storage... 00:11:45.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:45.682 08:38:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:45.682 08:38:20 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:45.682 08:38:20 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.682 08:38:20 thread -- common/autotest_common.sh@10 -- # set +x 00:11:45.682 ************************************ 00:11:45.682 START TEST thread_poller_perf 00:11:45.682 ************************************ 00:11:45.682 08:38:20 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:45.682 [2024-07-12 08:38:20.713407] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:45.682 [2024-07-12 08:38:20.714255] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114761 ] 00:11:45.952 [2024-07-12 08:38:20.877837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.209 [2024-07-12 08:38:21.151977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.209 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:47.602 ====================================== 00:11:47.602 busy:2208061916 (cyc) 00:11:47.602 total_run_count: 353000 00:11:47.602 tsc_hz: 2200000000 (cyc) 00:11:47.602 ====================================== 00:11:47.602 poller_cost: 6255 (cyc), 2843 (nsec) 00:11:47.602 00:11:47.602 real 0m1.913s 00:11:47.602 user 0m1.679s 00:11:47.602 sys 0m0.132s 00:11:47.602 ************************************ 00:11:47.602 END TEST thread_poller_perf 00:11:47.602 ************************************ 00:11:47.602 08:38:22 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.602 08:38:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:47.602 08:38:22 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:47.602 08:38:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:47.602 08:38:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:47.602 08:38:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.602 08:38:22 thread -- common/autotest_common.sh@10 -- # set +x 00:11:47.602 ************************************ 00:11:47.602 START TEST thread_poller_perf 00:11:47.602 ************************************ 00:11:47.602 08:38:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:47.602 [2024-07-12 08:38:22.684777] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:47.602 [2024-07-12 08:38:22.684981] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114805 ] 00:11:47.860 [2024-07-12 08:38:22.849465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.117 [2024-07-12 08:38:23.102515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.117 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:49.490 ====================================== 00:11:49.490 busy:2204219484 (cyc) 00:11:49.490 total_run_count: 3816000 00:11:49.490 tsc_hz: 2200000000 (cyc) 00:11:49.490 ====================================== 00:11:49.490 poller_cost: 577 (cyc), 262 (nsec) 00:11:49.490 00:11:49.490 real 0m1.891s 00:11:49.490 user 0m1.660s 00:11:49.490 sys 0m0.127s 00:11:49.490 ************************************ 00:11:49.490 08:38:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.490 08:38:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:49.490 END TEST thread_poller_perf 00:11:49.490 ************************************ 00:11:49.490 08:38:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:49.490 08:38:24 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:49.490 08:38:24 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:49.490 08:38:24 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:49.491 08:38:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.491 08:38:24 thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.491 ************************************ 00:11:49.491 START TEST thread_spdk_lock 00:11:49.491 ************************************ 00:11:49.491 08:38:24 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:49.491 [2024-07-12 08:38:24.632391] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:11:49.491 [2024-07-12 08:38:24.632633] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114869 ] 00:11:49.749 [2024-07-12 08:38:24.806704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.010 [2024-07-12 08:38:25.056745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.010 [2024-07-12 08:38:25.056732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.576 [2024-07-12 08:38:25.729871] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:50.576 [2024-07-12 08:38:25.730054] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:50.576 [2024-07-12 08:38:25.730098] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x55fc0b2f2b40 00:11:50.576 [2024-07-12 08:38:25.738529] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:50.576 [2024-07-12 08:38:25.738634] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:50.576 [2024-07-12 08:38:25.738674] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:51.143 Starting test contend 00:11:51.143 Worker Delay Wait us Hold us Total us 00:11:51.143 0 3 100117 222734 322852 00:11:51.143 1 5 32631 337212 369843 00:11:51.143 PASS test contend 00:11:51.143 Starting test hold_by_poller 00:11:51.143 PASS test hold_by_poller 00:11:51.143 Starting test hold_by_message 00:11:51.143 PASS test hold_by_message 00:11:51.143 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:51.143 100014 assertions passed 00:11:51.143 0 assertions failed 00:11:51.143 00:11:51.143 real 0m1.582s 00:11:51.143 user 0m2.021s 00:11:51.143 sys 0m0.141s 00:11:51.143 ************************************ 00:11:51.143 END TEST thread_spdk_lock 00:11:51.143 08:38:26 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.143 08:38:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:51.143 ************************************ 00:11:51.143 08:38:26 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:51.143 00:11:51.143 real 0m5.624s 00:11:51.143 user 0m5.487s 00:11:51.143 sys 0m0.498s 00:11:51.143 08:38:26 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.143 ************************************ 00:11:51.143 END TEST thread 00:11:51.143 ************************************ 00:11:51.143 08:38:26 thread -- common/autotest_common.sh@10 -- # set +x 00:11:51.143 08:38:26 -- common/autotest_common.sh@1142 -- # return 0 00:11:51.143 08:38:26 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:51.143 08:38:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:11:51.143 08:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.143 08:38:26 -- common/autotest_common.sh@10 -- # set +x 00:11:51.143 ************************************ 00:11:51.143 START TEST accel 00:11:51.143 ************************************ 00:11:51.143 08:38:26 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:51.402 * Looking for test storage... 00:11:51.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:51.402 08:38:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:51.402 08:38:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:51.402 08:38:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:51.402 08:38:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=114962 00:11:51.402 08:38:26 accel -- accel/accel.sh@63 -- # waitforlisten 114962 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@829 -- # '[' -z 114962 ']' 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.402 08:38:26 accel -- common/autotest_common.sh@10 -- # set +x 00:11:51.402 08:38:26 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:51.402 08:38:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:51.402 08:38:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:51.402 08:38:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:51.402 08:38:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.402 08:38:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.402 08:38:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:51.402 08:38:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:51.402 08:38:26 accel -- accel/accel.sh@41 -- # jq -r . 00:11:51.402 [2024-07-12 08:38:26.422822] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:51.402 [2024-07-12 08:38:26.423072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114962 ] 00:11:51.402 [2024-07-12 08:38:26.587603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.660 [2024-07-12 08:38:26.841710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.643 08:38:27 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.643 08:38:27 accel -- common/autotest_common.sh@862 -- # return 0 00:11:52.643 08:38:27 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:52.643 08:38:27 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:52.643 08:38:27 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:52.643 08:38:27 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:52.643 08:38:27 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:52.643 08:38:27 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:52.643 08:38:27 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.643 08:38:27 accel -- common/autotest_common.sh@10 -- # set +x 00:11:52.643 08:38:27 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:52.643 08:38:27 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.643 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.643 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.643 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.643 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.643 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.643 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.643 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.643 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # IFS== 00:11:52.644 08:38:27 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:52.644 08:38:27 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:52.644 08:38:27 accel -- accel/accel.sh@75 -- # killprocess 114962 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@948 -- # '[' -z 114962 ']' 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@952 -- # kill -0 114962 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@953 -- # uname 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114962 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:52.644 killing process with pid 114962 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114962' 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@967 -- # kill 114962 00:11:52.644 08:38:27 accel -- common/autotest_common.sh@972 -- # wait 114962 00:11:55.173 08:38:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:55.173 08:38:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@10 -- # set +x 00:11:55.173 08:38:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:55.173 08:38:30 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:11:55.173 08:38:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.173 08:38:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:55.173 08:38:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.173 08:38:30 accel -- common/autotest_common.sh@10 -- # set +x 00:11:55.173 ************************************ 00:11:55.173 START TEST accel_missing_filename 00:11:55.173 ************************************ 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.173 08:38:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:11:55.173 08:38:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:11:55.173 [2024-07-12 08:38:30.344004] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
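Note: the long run of "IFS==" / "read -r opc module" / "expected_opcs[$opc]=software" lines near the top of this excerpt is bash xtrace of a loop in accel.sh that records which module is expected to service each accel opcode; since no hardware acceleration module is configured in this run, every opcode maps to "software". The killprocess block that follows it checks the pid is alive (kill -0), confirms the process name, then kills and waits on it. A minimal sketch of the opcode loop, assuming exp_opcs holds "opcode=module" pairs (its population is not shown in this excerpt):

    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split "opcode=module"; IFS== sets IFS to "="
        expected_opcs["$opc"]=software            # this run has no hardware accel module
    done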
00:11:55.173 [2024-07-12 08:38:30.344204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115043 ] 00:11:55.431 [2024-07-12 08:38:30.508181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.689 [2024-07-12 08:38:30.749855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.948 [2024-07-12 08:38:30.975811] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:56.513 [2024-07-12 08:38:31.489884] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:56.770 A filename is required. 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:56.770 00:11:56.770 real 0m1.610s 00:11:56.770 user 0m1.321s 00:11:56.770 sys 0m0.242s 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.770 ************************************ 00:11:56.770 END TEST accel_missing_filename 00:11:56.770 ************************************ 00:11:56.770 08:38:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:11:56.770 08:38:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:56.770 08:38:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:56.770 08:38:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:56.770 08:38:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.770 08:38:31 accel -- common/autotest_common.sh@10 -- # set +x 00:11:56.770 ************************************ 00:11:56.770 START TEST accel_compress_verify 00:11:56.770 ************************************ 00:11:56.770 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:56.770 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:11:56.770 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:56.770 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:57.028 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.028 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:57.028 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.028 08:38:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.028 08:38:31 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:57.028 08:38:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:11:57.028 [2024-07-12 08:38:32.002424] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:11:57.028 [2024-07-12 08:38:32.002756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115095 ] 00:11:57.028 [2024-07-12 08:38:32.164149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.286 [2024-07-12 08:38:32.422754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.545 [2024-07-12 08:38:32.643706] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:58.150 [2024-07-12 08:38:33.174735] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:58.423 00:11:58.423 Compression does not support the verify option, aborting. 
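Note: accel_compress_verify, like accel_missing_filename before it, is a negative test: run_test wraps the command in NOT, so the test passes precisely because accel_perf aborts ("Compression does not support the verify option"). The real NOT helper in autotest_common.sh is more elaborate than this; the following only illustrates the core inversion idea:

    NOT() {
        if "$@"; then
            return 1     # the wrapped command unexpectedly succeeded
        fi
        return 0         # failure is exactly what the negative test expects
    }
    NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y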
00:11:58.682 ************************************ 00:11:58.682 END TEST accel_compress_verify 00:11:58.682 ************************************ 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.682 00:11:58.682 real 0m1.662s 00:11:58.682 user 0m1.366s 00:11:58.682 sys 0m0.247s 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.682 08:38:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:58.682 08:38:33 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 ************************************ 00:11:58.682 START TEST accel_wrong_workload 00:11:58.682 ************************************ 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:58.682 08:38:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
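Note: before execing the target, NOT runs it through valid_exec_arg, visible above as the repeated case "$(type -t "$arg")" trace lines: the helper only accepts arguments that bash can actually execute. A sketch consistent with that trace (the exact case branches are an assumption):

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin | file) return 0 ;;   # something bash can run
            *) return 1 ;;
        esac
    }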
00:11:58.682 Unsupported workload type: foobar 00:11:58.682 [2024-07-12 08:38:33.718236] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:58.682 accel_perf options: 00:11:58.682 [-h help message] 00:11:58.682 [-q queue depth per core] 00:11:58.682 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:58.682 [-T number of threads per core 00:11:58.682 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:58.682 [-t time in seconds] 00:11:58.682 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:58.682 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:58.682 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:58.682 [-l for compress/decompress workloads, name of uncompressed input file 00:11:58.682 [-S for crc32c workload, use this seed value (default 0) 00:11:58.682 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:58.682 [-f for fill workload, use this BYTE value (default 255) 00:11:58.682 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:58.682 [-y verify result if this switch is on] 00:11:58.682 [-a tasks to allocate per core (default: same value as -q)] 00:11:58.682 Can be used to spread operations across a wider range of memory. 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.682 00:11:58.682 real 0m0.072s 00:11:58.682 user 0m0.093s 00:11:58.682 sys 0m0.040s 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.682 08:38:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 ************************************ 00:11:58.682 END TEST accel_wrong_workload 00:11:58.682 ************************************ 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:58.682 08:38:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.682 08:38:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 ************************************ 00:11:58.682 START TEST accel_negative_buffers 00:11:58.682 ************************************ 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.682 08:38:33 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:58.682 08:38:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:58.682 -x option must be non-negative. 00:11:58.682 [2024-07-12 08:38:33.838452] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:58.682 accel_perf options: 00:11:58.682 [-h help message] 00:11:58.682 [-q queue depth per core] 00:11:58.682 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:58.682 [-T number of threads per core 00:11:58.682 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:58.682 [-t time in seconds] 00:11:58.682 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:58.682 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:58.682 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:58.682 [-l for compress/decompress workloads, name of uncompressed input file 00:11:58.682 [-S for crc32c workload, use this seed value (default 0) 00:11:58.682 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:58.682 [-f for fill workload, use this BYTE value (default 255) 00:11:58.682 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:58.682 [-y verify result if this switch is on] 00:11:58.682 [-a tasks to allocate per core (default: same value as -q)] 00:11:58.682 Can be used to spread operations across a wider range of memory. 
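Note: the accel_perf usage text above is printed twice because both accel_wrong_workload (-w foobar) and accel_negative_buffers (-x -1) make the binary bail out during option parsing. Valid and invalid forms of the -x flag, using the binary path from this log:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$accel_perf" -t 1 -w xor -y -x 2     # OK: xor needs at least two source buffers
    "$accel_perf" -t 1 -w xor -y -x -1    # rejected: "-x option must be non-negative."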
00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:58.682 00:11:58.682 real 0m0.072s 00:11:58.682 user 0m0.098s 00:11:58.682 sys 0m0.029s 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.682 08:38:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:11:58.682 ************************************ 00:11:58.682 END TEST accel_negative_buffers 00:11:58.682 ************************************ 00:11:58.940 08:38:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:58.940 08:38:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:58.940 08:38:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:58.940 08:38:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.940 08:38:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:58.940 ************************************ 00:11:58.940 START TEST accel_crc32c 00:11:58.940 ************************************ 00:11:58.940 08:38:33 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:58.940 08:38:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:58.940 [2024-07-12 08:38:33.958439] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
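Note: the es= arithmetic threaded through the failures above is the harness normalizing exit statuses: a status above 128 means "killed by signal (status - 128)", so 234 becomes 106 and 161 becomes 33 before both collapse to 1. A sketch of that normalization; the full case branches are not shown in this excerpt, so the mapping below is the observed behavior only:

    (( es > 128 )) && es=$(( es - 128 ))   # strip the killed-by-signal offset
    case "$es" in
        0) ;;                              # genuine success stays 0
        *) es=1 ;;                         # every failure mode folds to 1
    esac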
00:11:58.940 [2024-07-12 08:38:33.958865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115203 ] 00:11:58.940 [2024-07-12 08:38:34.122561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.505 [2024-07-12 08:38:34.402340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:11:59.505 08:38:34 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:59.505 08:38:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.033 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 ************************************ 00:12:02.034 END TEST accel_crc32c 00:12:02.034 ************************************ 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:02.034 08:38:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.034 00:12:02.034 real 0m2.738s 00:12:02.034 user 0m2.434s 00:12:02.034 sys 0m0.227s 00:12:02.034 08:38:36 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.034 08:38:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:02.034 08:38:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:02.034 08:38:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:02.034 08:38:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:02.034 08:38:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.034 08:38:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.034 ************************************ 00:12:02.034 START TEST accel_crc32c_C2 00:12:02.034 ************************************ 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:02.034 08:38:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:02.034 [2024-07-12 08:38:36.749390] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
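Note: accel_crc32c_C2 repeats the crc32c workload with -C 2, which per the usage text earlier sets the I/O vector size, so the CRC is computed over a two-element chained buffer rather than a single 4 KiB buffer. Invocation as logged:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2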
00:12:02.034 [2024-07-12 08:38:36.749754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115266 ] 00:12:02.034 [2024-07-12 08:38:36.922179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.034 [2024-07-12 08:38:37.199837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:02.292 08:38:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
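Note: the wall of "val=..." / case "$var" / "IFS=:" / "read -r var val" lines is not an error; it is xtrace of accel.sh replaying the workload parameters one token at a time through a read loop, so each parameter (0x1, crc32c, 0, '4096 bytes', software, 32, 1, '1 seconds', Yes) produces the same four-line pattern. The loop shape is implied by the trace; the branch patterns below are purely illustrative:

    while IFS=: read -r var val; do      # one iteration per "var: val" pair
        case "$var" in
            *opc*) accel_opc=$val ;;         # e.g. records accel_opc=crc32c
            *module*) accel_module=$val ;;   # e.g. records accel_module=software
        esac
    done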
00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:04.830 ************************************ 00:12:04.830 END TEST accel_crc32c_C2 00:12:04.830 ************************************ 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:04.830 00:12:04.830 real 0m2.764s 00:12:04.830 user 0m2.467s 00:12:04.830 sys 0m0.229s 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.830 08:38:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:04.830 08:38:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:04.830 08:38:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:04.830 08:38:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:04.830 08:38:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.830 08:38:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:04.830 ************************************ 00:12:04.830 START TEST accel_copy 00:12:04.830 ************************************ 00:12:04.831 08:38:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:04.831 08:38:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:04.831 08:38:39 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:04.831 [2024-07-12 08:38:39.568802] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:04.831 [2024-07-12 08:38:39.569217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115324 ] 00:12:04.831 [2024-07-12 08:38:39.738901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.088 [2024-07-12 08:38:40.024290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.088 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:05.089 08:38:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
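Note: each positive test ends with three assertions, visible just below as the [[ -n software ]], [[ -n copy ]], and [[ software == \s\o\f\t\w\a\r\e ]] checks: a module was selected, the opcode was recorded, and the module matches the expected one, which is "software" throughout this run. In plainer form (accel_module and accel_opc appear in the trace; expected_module is an illustrative name):

    [[ -n $accel_module ]]                      # some module handled the op
    [[ -n $accel_opc ]]                         # the opcode was captured
    [[ $accel_module == "$expected_module" ]]   # here always "software"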
00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:07.633 08:38:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.633 00:12:07.633 real 0m2.756s 00:12:07.633 user 0m2.420s 00:12:07.633 sys 0m0.244s 00:12:07.633 08:38:42 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.633 08:38:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:07.633 ************************************ 00:12:07.633 END TEST accel_copy 00:12:07.633 ************************************ 00:12:07.633 08:38:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:07.633 08:38:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:07.633 08:38:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:07.633 08:38:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.633 08:38:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:07.633 ************************************ 00:12:07.633 START TEST accel_fill 00:12:07.633 ************************************ 00:12:07.633 08:38:42 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:07.633 08:38:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:07.634 08:38:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:07.634 [2024-07-12 08:38:42.377178] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
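Note: accel_fill exercises the fill workload with -f 128 -q 64 -a 64; per the usage text earlier, that is fill byte 128 (it shows up in the trace below as val=0x80), queue depth 64, and 64 tasks allocated per core, which apparently account for the two val=64 entries. Invocation as logged:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y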
00:12:07.634 [2024-07-12 08:38:42.377412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115381 ] 00:12:07.634 [2024-07-12 08:38:42.547660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.634 [2024-07-12 08:38:42.823993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:07.892 08:38:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 
08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:10.424 08:38:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.424 00:12:10.424 real 0m2.752s 00:12:10.424 user 0m2.449s 00:12:10.424 sys 0m0.234s 00:12:10.424 ************************************ 00:12:10.424 08:38:45 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.424 08:38:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:10.424 END TEST accel_fill 00:12:10.424 ************************************ 00:12:10.424 08:38:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:10.424 08:38:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:10.424 08:38:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:10.424 08:38:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.424 08:38:45 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.424 ************************************ 00:12:10.424 START TEST accel_copy_crc32c 00:12:10.424 ************************************ 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:10.424 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:10.424 [2024-07-12 08:38:45.179146] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
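[Annotation] The accel_fill unit that finishes above drives SPDK's software fill path for one second on a single core, verifying 4096-byte fill operations (the 0x80 in the parameter dump is read by the harness next to the buffer size; the log does not spell out its meaning). A minimal sketch of repeating the run by hand, assuming the in-tree accel_perf path that appears later in this log and dropping the harness-only "-c /dev/fd/62" config pipe:

    # 1-second software fill run over 4096-byte buffers; -y enables result verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -y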
00:12:10.424 [2024-07-12 08:38:45.179349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115457 ] 00:12:10.424 [2024-07-12 08:38:45.349982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.684 [2024-07-12 08:38:45.624371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.684 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 
08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.943 08:38:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.846 00:12:12.846 real 0m2.748s 00:12:12.846 user 0m2.452s 00:12:12.846 sys 0m0.219s 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.846 08:38:47 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:12.846 ************************************ 00:12:12.846 END TEST accel_copy_crc32c 00:12:12.846 ************************************ 00:12:12.846 08:38:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:12.846 08:38:47 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:12.846 08:38:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:12.846 08:38:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.846 08:38:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:12.846 ************************************ 00:12:12.846 START TEST accel_copy_crc32c_C2 00:12:12.846 ************************************ 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:12.846 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:12.847 08:38:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:12.847 [2024-07-12 08:38:47.979404] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:12.847 [2024-07-12 08:38:47.979616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115520 ] 00:12:13.106 [2024-07-12 08:38:48.150547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.366 [2024-07-12 08:38:48.433351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:13.625 08:38:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:15.527 ************************************ 00:12:15.527 END TEST accel_copy_crc32c_C2 00:12:15.527 ************************************ 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:15.527 00:12:15.527 real 0m2.760s 00:12:15.527 user 0m2.454s 00:12:15.527 sys 0m0.226s 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.527 08:38:50 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.785 08:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:15.785 08:38:50 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:15.785 08:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:15.785 08:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.785 08:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 ************************************ 00:12:15.785 START TEST accel_dualcast 00:12:15.785 ************************************ 00:12:15.785 08:38:50 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:15.785 08:38:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:15.785 [2024-07-12 08:38:50.789544] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
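[Annotation] The two copy_crc32c units completing above exercise the combined copy-plus-CRC32C operation in the software module: the plain variant copies 4096-byte buffers with CRC seed 0, while the -C 2 variant's parameter dump shows an 8192-byte buffer beside the 4096-byte one, consistent with the source being split across two io vectors (an inference from the test name; the log itself does not define -C). The pair of invocations as run_test records them:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$accel_perf" -t 1 -w copy_crc32c -y        # single-vector copy + CRC32C, seed 0
    "$accel_perf" -t 1 -w copy_crc32c -y -C 2   # accel_copy_crc32c_C2 variant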
00:12:15.785 [2024-07-12 08:38:50.789953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115578 ] 00:12:15.785 [2024-07-12 08:38:50.962319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.369 [2024-07-12 08:38:51.237893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:16.370 08:38:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:18.317 ************************************ 00:12:18.317 END TEST accel_dualcast 00:12:18.317 ************************************ 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:18.317 08:38:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:18.317 00:12:18.317 real 0m2.738s 00:12:18.317 user 0m2.424s 00:12:18.317 sys 0m0.238s 00:12:18.317 08:38:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.317 08:38:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:18.575 08:38:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:18.575 08:38:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:18.575 08:38:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:18.575 08:38:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.575 08:38:53 accel -- common/autotest_common.sh@10 -- # set +x 00:12:18.575 ************************************ 00:12:18.575 START TEST accel_compare 00:12:18.575 ************************************ 00:12:18.575 08:38:53 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:18.575 08:38:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:18.575 [2024-07-12 08:38:53.577938] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
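[Annotation] accel_dualcast, which passes above in roughly 2.74 s of wall time, covers the operation that writes one 4096-byte source to two destinations at once. A standalone sketch under the same assumptions as the earlier ones:

    # dualcast: one source buffer duplicated to two destinations, verified with -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y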
00:12:18.575 [2024-07-12 08:38:53.578326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115635 ] 00:12:18.575 [2024-07-12 08:38:53.746723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.833 [2024-07-12 08:38:54.020096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:19.091 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:19.092 08:38:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 ************************************ 00:12:21.622 END TEST accel_compare 00:12:21.622 ************************************ 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:21.622 08:38:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.622 00:12:21.622 real 0m2.701s 00:12:21.622 user 0m2.421s 00:12:21.622 sys 0m0.200s 00:12:21.622 08:38:56 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.622 08:38:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:21.622 08:38:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:21.622 08:38:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:21.622 08:38:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:21.622 08:38:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.622 08:38:56 accel -- common/autotest_common.sh@10 -- # set +x 00:12:21.622 ************************************ 00:12:21.622 START TEST accel_xor 00:12:21.622 ************************************ 00:12:21.622 08:38:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.622 08:38:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:21.623 08:38:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:21.623 08:38:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:21.623 [2024-07-12 08:38:56.330264] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
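[Annotation] accel_compare, ending above after about 2.70 s, exercises the buffer-equality operation: accel_perf submits pairs of 4096-byte buffers and, with -y, checks that the compare results come back as expected. Sketched standalone:

    # compare: pairwise 4096-byte buffer comparison through the software module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y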
00:12:21.623 [2024-07-12 08:38:56.330652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115708 ] 00:12:21.623 [2024-07-12 08:38:56.499879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.623 [2024-07-12 08:38:56.773058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.881 08:38:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 ************************************ 00:12:24.408 END TEST accel_xor 00:12:24.408 ************************************ 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:24.408 00:12:24.408 real 0m2.754s 00:12:24.408 user 0m2.468s 00:12:24.408 sys 0m0.225s 00:12:24.408 08:38:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.408 08:38:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:24.408 08:38:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:24.408 08:38:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:24.408 08:38:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:24.408 08:38:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.408 08:38:59 accel -- common/autotest_common.sh@10 -- # set +x 00:12:24.408 ************************************ 00:12:24.408 START TEST accel_xor 00:12:24.408 ************************************ 00:12:24.408 08:38:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:24.408 08:38:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:24.408 [2024-07-12 08:38:59.140682] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
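[Annotation] The xor unit ending above runs with two source buffers (val=2 in its parameter dump), and the run starting below repeats it with -x 3, i.e. three sources XORed into one destination. Taken together, this stretch of the log is the software-module sweep of accel workloads; a sketch of the whole sequence as the harness drives it, assuming only the binary path recorded in the lines above:

    accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    for w in fill copy_crc32c dualcast compare xor; do
        "$accel_perf" -t 1 -w "$w" -y           # 1-second verified run per workload
    done
    "$accel_perf" -t 1 -w copy_crc32c -y -C 2   # two-vector copy+CRC32C variant
    "$accel_perf" -t 1 -w xor -y -x 3           # three-source xor variant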
00:12:24.408 [2024-07-12 08:38:59.140914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115771 ] 00:12:24.408 [2024-07-12 08:38:59.311132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.408 [2024-07-12 08:38:59.589834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:24.666 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:24.667 08:38:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 ************************************ 00:12:27.197 END TEST accel_xor 00:12:27.197 ************************************ 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:27.197 08:39:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.197 00:12:27.197 real 0m2.764s 00:12:27.197 user 0m2.464s 00:12:27.197 sys 0m0.224s 00:12:27.197 08:39:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.197 08:39:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 08:39:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:27.197 08:39:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:27.197 08:39:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:27.197 08:39:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.197 08:39:01 accel -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 ************************************ 00:12:27.197 START TEST accel_dif_verify 00:12:27.197 ************************************ 00:12:27.197 08:39:01 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:27.197 08:39:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:27.197 [2024-07-12 08:39:01.959056] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
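The accel_dif_verify case starting here follows the same pattern with only the workload name changed; under the same assumptions as the sketch above, the logged command reduces to:

  # Sketch only: the DIF verify pass as accel.sh invokes it above.
  # The 4096-, 512- and 8-byte values echoed in the surrounding trace are
  # accel_perf's reported buffer/DIF sizes, not additional flags.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify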
00:12:27.197 [2024-07-12 08:39:01.959410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115820 ] 00:12:27.197 [2024-07-12 08:39:02.131201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.455 [2024-07-12 08:39:02.407476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:27.713 08:39:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:29.615 ************************************ 00:12:29.615 END TEST accel_dif_verify 00:12:29.615 ************************************ 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:29.615 08:39:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:29.615 00:12:29.615 real 0m2.771s 00:12:29.615 user 0m2.447s 00:12:29.615 sys 0m0.247s 00:12:29.615 08:39:04 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.615 08:39:04 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:29.615 08:39:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:29.615 08:39:04 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:29.615 08:39:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:29.615 08:39:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.615 08:39:04 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.615 ************************************ 00:12:29.615 START TEST accel_dif_generate 00:12:29.616 ************************************ 00:12:29.616 08:39:04 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:29.616 08:39:04 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:29.616 08:39:04 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:29.616 [2024-07-12 08:39:04.784465] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:29.616 [2024-07-12 08:39:04.784688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115899 ] 00:12:29.873 [2024-07-12 08:39:04.981987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.130 [2024-07-12 08:39:05.269932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.388 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.389 08:39:05 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:30.389 08:39:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:32.285 08:39:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:32.285 00:12:32.285 real 0m2.736s 
00:12:32.285 user 0m2.404s 00:12:32.285 sys 0m0.255s 00:12:32.285 08:39:07 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.285 08:39:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:32.285 ************************************ 00:12:32.285 END TEST accel_dif_generate 00:12:32.285 ************************************ 00:12:32.543 08:39:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:32.543 08:39:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:32.543 08:39:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:32.543 08:39:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.543 08:39:07 accel -- common/autotest_common.sh@10 -- # set +x 00:12:32.543 ************************************ 00:12:32.543 START TEST accel_dif_generate_copy 00:12:32.543 ************************************ 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:32.543 08:39:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:32.543 [2024-07-12 08:39:07.572165] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
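accel_dif_generate, which finished just above (real 0m2.736s), is the generate-side counterpart of the verify case and was invoked the same way; a standalone sketch under the earlier assumptions:

  # Sketch only: DIF generate, matching the run_test accel_dif_generate call above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate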
00:12:32.543 [2024-07-12 08:39:07.572529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115959 ] 00:12:32.800 [2024-07-12 08:39:07.747357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.058 [2024-07-12 08:39:08.004559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.058 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.316 08:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
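The accel_dif_generate_copy pass configured in the trace above (software module, one-second run) differs from the two previous DIF cases only in its opcode; sketched the same way:

  # Sketch only: the DIF generate-and-copy variant of the same one-second run.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy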
00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.232 00:12:35.232 real 0m2.616s 00:12:35.232 user 0m2.310s 00:12:35.232 sys 0m0.249s 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.232 08:39:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 ************************************ 00:12:35.232 END TEST accel_dif_generate_copy 00:12:35.232 ************************************ 00:12:35.232 08:39:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:35.232 08:39:10 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:35.232 08:39:10 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.232 08:39:10 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:12:35.232 08:39:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.232 08:39:10 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 ************************************ 00:12:35.232 START TEST accel_comp 00:12:35.232 ************************************ 00:12:35.232 08:39:10 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:35.232 08:39:10 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:35.232 08:39:10 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:35.232 [2024-07-12 08:39:10.236840] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:35.232 [2024-07-12 08:39:10.237061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116018 ] 00:12:35.232 [2024-07-12 08:39:10.411259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.489 [2024-07-12 08:39:10.671297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:35.747 08:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:37.642 08:39:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.642 00:12:37.642 real 0m2.546s 00:12:37.642 user 0m2.310s 00:12:37.642 sys 0m0.175s 00:12:37.642 08:39:12 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.642 08:39:12 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:37.642 ************************************ 00:12:37.642 END TEST accel_comp 00:12:37.642 ************************************ 00:12:37.642 08:39:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:37.642 08:39:12 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:37.642 08:39:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:12:37.642 08:39:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.642 08:39:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.643 ************************************ 00:12:37.643 START TEST accel_decomp 00:12:37.643 ************************************ 00:12:37.643 08:39:12 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:37.643 08:39:12 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:37.643 [2024-07-12 08:39:12.833428] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:37.643 [2024-07-12 08:39:12.833720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116074 ] 00:12:37.900 [2024-07-12 08:39:13.007663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.158 [2024-07-12 08:39:13.224083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
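The long run of IFS=: / read -r var val / case "$var" in entries above is xtrace output from accel.sh reading the run's settings back as var:val pairs; among the values it records are accel_opc=decompress and accel_module=software, both visible in the trace. The accel.sh source itself is not part of this log, so the loop below is only an illustrative sketch of the pattern being traced, with a placeholder action standing in for the real per-key handling:

  # illustrative sketch of the var:val loop traced above, not the actual accel.sh source
  # e.g.: printf 'opc:decompress\nmodule:software\n' | bash this_sketch.sh
  while IFS=: read -r var val; do
      case "$var" in
          *) echo "recorded setting: $var -> $val" ;;   # accel.sh instead stores e.g. accel_opc, accel_module
      esac
  done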
00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:38.416 08:39:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:40.323 08:39:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:40.323 00:12:40.323 real 0m2.480s 00:12:40.323 user 0m2.203s 00:12:40.323 sys 0m0.216s 00:12:40.323 08:39:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:40.323 08:39:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:40.323 ************************************ 00:12:40.323 END TEST accel_decomp 00:12:40.323 ************************************ 00:12:40.323 08:39:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:40.323 08:39:15 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:40.323 08:39:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:40.323 08:39:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.323 08:39:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:40.323 ************************************ 00:12:40.323 START TEST accel_decomp_full 00:12:40.323 ************************************ 00:12:40.323 08:39:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:40.323 08:39:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:40.323 [2024-07-12 08:39:15.361235] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
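The accel_decomp_full case being started here differs from the plain accel_decomp run above only in the extra -o 0 argument handed to accel_perf; the value traces show it driving a '111250 bytes' buffer where the plain runs use '4096 bytes'. Something close to this step can presumably be reproduced by hand, assuming the same checkout location this job uses and leaving out the -c /dev/fd/62 accel config the wrapper feeds in (empty for this software-module run, going by the accel_json_cfg=() trace):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0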
00:12:40.323 [2024-07-12 08:39:15.361492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116152 ] 00:12:40.610 [2024-07-12 08:39:15.532789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.610 [2024-07-12 08:39:15.737419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:40.868 08:39:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:42.769 08:39:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:42.769 00:12:42.769 real 0m2.471s 00:12:42.769 user 0m2.243s 00:12:42.769 sys 0m0.173s 00:12:42.769 08:39:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.769 08:39:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:12:42.769 ************************************ 00:12:42.769 END TEST accel_decomp_full 00:12:42.769 ************************************ 00:12:42.769 08:39:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:42.769 08:39:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:42.769 08:39:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:42.769 08:39:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.769 08:39:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:42.769 ************************************ 00:12:42.769 START TEST accel_decomp_mcore 00:12:42.769 ************************************ 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:42.769 08:39:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:42.769 [2024-07-12 08:39:17.885450] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:42.769 [2024-07-12 08:39:17.885700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116203 ] 00:12:43.027 [2024-07-12 08:39:18.072849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.285 [2024-07-12 08:39:18.295483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.285 [2024-07-12 08:39:18.295617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.285 [2024-07-12 08:39:18.295863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.285 [2024-07-12 08:39:18.295873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:43.544 08:39:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.445 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.445 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.445 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.445 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.446 00:12:45.446 real 0m2.538s 00:12:45.446 user 0m7.381s 00:12:45.446 sys 0m0.204s 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.446 08:39:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 ************************************ 00:12:45.446 END TEST accel_decomp_mcore 00:12:45.446 ************************************ 00:12:45.446 08:39:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:45.446 08:39:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:45.446 08:39:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:45.446 08:39:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.446 08:39:20 accel -- common/autotest_common.sh@10 -- # set +x 00:12:45.446 ************************************ 00:12:45.446 START TEST accel_decomp_full_mcore 00:12:45.446 ************************************ 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.446 08:39:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:45.446 08:39:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:45.446 [2024-07-12 08:39:20.465024] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:45.446 [2024-07-12 08:39:20.465339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116264 ] 00:12:45.705 [2024-07-12 08:39:20.644257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.705 [2024-07-12 08:39:20.878147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.705 [2024-07-12 08:39:20.878293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.705 [2024-07-12 08:39:20.878560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.705 [2024-07-12 08:39:20.878570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:45.963 08:39:21 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.963 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.964 08:39:21 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:45.964 08:39:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.495 00:12:48.495 real 0m2.744s 00:12:48.495 user 0m7.957s 00:12:48.495 sys 0m0.188s 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.495 08:39:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:48.495 ************************************ 00:12:48.495 END TEST accel_decomp_full_mcore 00:12:48.495 ************************************ 00:12:48.495 08:39:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:48.495 08:39:23 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:48.495 08:39:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:48.495 08:39:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.495 08:39:23 accel -- common/autotest_common.sh@10 -- # set +x 00:12:48.495 ************************************ 00:12:48.495 START TEST accel_decomp_mthread 00:12:48.495 ************************************ 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:48.495 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:48.495 [2024-07-12 08:39:23.267859] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
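One detail worth noting in the two mcore summaries above: user time exceeds wall-clock time because the 0xf core mask spreads the decompress work across four reactors, so CPU seconds accumulate on several cores at once. For the full_mcore run that just ended, 0m7.957s of user time over 0m2.744s real works out to about 7.957 / 2.744 ≈ 2.9 cores busy on average, and the earlier mcore run (7.381s over 2.538s real) lands in the same range. The accel_decomp_mthread test starting here flips the approach: it stays on a single core (-c 0x1 in the EAL parameters below) and instead asks accel_perf for two threads via -T 2.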
00:12:48.495 [2024-07-12 08:39:23.268926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116323 ] 00:12:48.495 [2024-07-12 08:39:23.437638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.753 [2024-07-12 08:39:23.707103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.012 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:49.013 08:39:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.915 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 ************************************ 00:12:50.916 END TEST accel_decomp_mthread 00:12:50.916 ************************************ 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.916 00:12:50.916 real 0m2.738s 00:12:50.916 user 0m2.408s 00:12:50.916 sys 0m0.249s 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.916 08:39:25 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:50.916 08:39:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:50.916 08:39:25 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:50.916 08:39:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:50.916 08:39:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.916 08:39:25 accel -- common/autotest_common.sh@10 -- # set +x 00:12:50.916 ************************************ 00:12:50.916 START 
TEST accel_decomp_full_mthread 00:12:50.916 ************************************ 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:50.916 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:50.916 [2024-07-12 08:39:26.061794] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:12:50.916 [2024-07-12 08:39:26.062189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116405 ] 00:12:51.174 [2024-07-12 08:39:26.233626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.433 [2024-07-12 08:39:26.508229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.692 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:51.693 08:39:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:51.693 08:39:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:53.595 ************************************ 00:12:53.595 END TEST accel_decomp_full_mthread 00:12:53.595 ************************************ 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.595 00:12:53.595 real 0m2.768s 00:12:53.595 user 0m2.456s 00:12:53.595 sys 0m0.248s 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.595 08:39:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
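Both decompress cases above (accel_decomp_mthread and accel_decomp_full_mthread) drive the same accel_perf example binary; the exact command line is the one captured in this log. The sketch below replays it outside the harness, assuming the harness-built JSON config fed in on /dev/fd/62 can be dropped when no extra accel modules are configured; the per-flag notes are readings of this log, not quoted from the tool's help text.

  # Replay of the captured invocation (paths as used in this run):
  #   -w decompress  workload under test
  #   -l bib         compressed input file shipped with the test suite
  #   -t 1           run for one second (the '1 seconds' value set above)
  #   -T 2           two worker threads, the "mthread" part of the test name
  #   -o 0           added only by the "full"-buffer variant of the test
  #   -y             verify the decompressed output
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2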
00:12:53.853 08:39:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:53.853 08:39:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:53.853 08:39:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:53.853 08:39:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:53.853 08:39:28 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:53.853 08:39:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.853 08:39:28 accel -- common/autotest_common.sh@10 -- # set +x 00:12:53.853 08:39:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.853 08:39:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.853 08:39:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.853 08:39:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.853 08:39:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.853 08:39:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:53.853 08:39:28 accel -- accel/accel.sh@41 -- # jq -r . 00:12:53.853 ************************************ 00:12:53.853 START TEST accel_dif_functional_tests 00:12:53.853 ************************************ 00:12:53.853 08:39:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:53.853 [2024-07-12 08:39:28.916372] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:12:53.853 [2024-07-12 08:39:28.916794] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116468 ] 00:12:54.111 [2024-07-12 08:39:29.100549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.370 [2024-07-12 08:39:29.363189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.370 [2024-07-12 08:39:29.363285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.370 [2024-07-12 08:39:29.363284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.628 00:12:54.628 00:12:54.628 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.628 http://cunit.sourceforge.net/ 00:12:54.628 00:12:54.628 00:12:54.628 Suite: accel_dif 00:12:54.628 Test: verify: DIF generated, GUARD check ...passed 00:12:54.628 Test: verify: DIF generated, APPTAG check ...passed 00:12:54.628 Test: verify: DIF generated, REFTAG check ...passed 00:12:54.628 Test: verify: DIF not generated, GUARD check ...[2024-07-12 08:39:29.717134] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:54.628 passed 00:12:54.628 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 08:39:29.717621] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:54.628 passed 00:12:54.628 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 08:39:29.718058] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:54.628 passed 00:12:54.628 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:54.628 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 08:39:29.718764] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:54.628 passed 00:12:54.628 Test: verify: APPTAG incorrect, 
no APPTAG check ...passed 00:12:54.628 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:54.628 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:54.628 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 08:39:29.719725] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:54.628 passed 00:12:54.628 Test: verify copy: DIF generated, GUARD check ...passed 00:12:54.628 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:54.628 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:54.628 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 08:39:29.720908] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:54.628 passed 00:12:54.628 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 08:39:29.721364] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:54.628 passed 00:12:54.628 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 08:39:29.721695] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:54.628 passed 00:12:54.628 Test: generate copy: DIF generated, GUARD check ...passed 00:12:54.628 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:54.628 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:54.628 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:54.628 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:54.628 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:54.628 Test: generate copy: iovecs-len validate ...[2024-07-12 08:39:29.723490] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:54.628 passed 00:12:54.628 Test: generate copy: buffer alignment validate ...passed 00:12:54.628 00:12:54.628 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.628 suites 1 1 n/a 0 0 00:12:54.628 tests 26 26 26 0 0 00:12:54.628 asserts 115 115 115 0 n/a 00:12:54.628 00:12:54.628 Elapsed time = 0.023 seconds 00:12:56.028 ************************************ 00:12:56.028 END TEST accel_dif_functional_tests 00:12:56.028 ************************************ 00:12:56.028 00:12:56.028 real 0m2.148s 00:12:56.028 user 0m4.070s 00:12:56.028 sys 0m0.333s 00:12:56.028 08:39:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.028 08:39:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:56.028 08:39:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:56.028 00:12:56.028 real 1m4.758s 00:12:56.028 user 1m10.044s 00:12:56.028 sys 0m6.602s 00:12:56.028 ************************************ 00:12:56.028 END TEST accel 00:12:56.028 ************************************ 00:12:56.028 08:39:31 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.028 08:39:31 accel -- common/autotest_common.sh@10 -- # set +x 00:12:56.028 08:39:31 -- common/autotest_common.sh@1142 -- # return 0 00:12:56.028 08:39:31 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:56.028 08:39:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:56.028 08:39:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.028 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:12:56.028 ************************************ 00:12:56.028 START TEST accel_rpc 00:12:56.028 ************************************ 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:56.028 * Looking for test storage... 00:12:56.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:56.028 08:39:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:56.028 08:39:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=116560 00:12:56.028 08:39:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:56.028 08:39:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 116560 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 116560 ']' 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.028 08:39:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.286 [2024-07-12 08:39:31.220580] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
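The accel_rpc suite starting here exercises opcode reassignment purely over JSON-RPC against a target launched with --wait-for-rpc. Reconstructed from the RPC calls captured below, the flow is roughly the following; this is a sketch of the sequence only, with the socket readiness checks, error handling and process cleanup that accel_rpc.sh performs left out.

  # Minimal replay of the opcode-assignment sequence seen in this run:
  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # bogus module; the log shows it is accepted before init
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the 'copy' opcode to the software module
  ./scripts/rpc.py framework_start_init                     # finish startup so the assignment is applied
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected to print: software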
00:12:56.286 [2024-07-12 08:39:31.221075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116560 ] 00:12:56.286 [2024-07-12 08:39:31.383288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.544 [2024-07-12 08:39:31.633524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.110 08:39:32 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.110 08:39:32 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:57.110 08:39:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:57.110 08:39:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:57.110 08:39:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:57.110 08:39:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:57.110 08:39:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:57.110 08:39:32 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:57.110 08:39:32 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.110 08:39:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 ************************************ 00:12:57.110 START TEST accel_assign_opcode 00:12:57.110 ************************************ 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 [2024-07-12 08:39:32.210807] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:57.110 [2024-07-12 08:39:32.218676] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.110 08:39:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.048 software 00:12:58.048 ************************************ 00:12:58.048 END TEST accel_assign_opcode 00:12:58.048 ************************************ 00:12:58.048 00:12:58.048 real 0m0.922s 00:12:58.048 user 0m0.056s 00:12:58.048 sys 0m0.008s 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.048 08:39:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:58.048 08:39:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 116560 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 116560 ']' 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 116560 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116560 00:12:58.048 killing process with pid 116560 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116560' 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@967 -- # kill 116560 00:12:58.048 08:39:33 accel_rpc -- common/autotest_common.sh@972 -- # wait 116560 00:13:00.571 ************************************ 00:13:00.571 END TEST accel_rpc 00:13:00.571 ************************************ 00:13:00.571 00:13:00.571 real 0m4.537s 00:13:00.571 user 0m4.488s 00:13:00.571 sys 0m0.614s 00:13:00.571 08:39:35 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.571 08:39:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.571 08:39:35 -- common/autotest_common.sh@1142 -- # return 0 00:13:00.571 08:39:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:00.571 08:39:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:00.572 08:39:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.572 08:39:35 -- common/autotest_common.sh@10 -- # set +x 00:13:00.572 ************************************ 00:13:00.572 START TEST app_cmdline 00:13:00.572 ************************************ 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:00.572 * Looking for test storage... 
00:13:00.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:00.572 08:39:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:00.572 08:39:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=116720 00:13:00.572 08:39:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 116720 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 116720 ']' 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.572 08:39:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.572 08:39:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:00.829 [2024-07-12 08:39:35.808145] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:13:00.829 [2024-07-12 08:39:35.808356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116720 ] 00:13:00.829 [2024-07-12 08:39:35.973941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.086 [2024-07-12 08:39:36.219304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.016 08:39:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.016 08:39:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:13:02.016 08:39:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:02.272 { 00:13:02.272 "version": "SPDK v24.09-pre git sha1 b3936a144", 00:13:02.272 "fields": { 00:13:02.272 "major": 24, 00:13:02.272 "minor": 9, 00:13:02.272 "patch": 0, 00:13:02.272 "suffix": "-pre", 00:13:02.272 "commit": "b3936a144" 00:13:02.272 } 00:13:02.272 } 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:02.272 08:39:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.272 08:39:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:02.272 08:39:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.272 08:39:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:02.273 08:39:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:02.273 08:39:37 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:02.273 08:39:37 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:02.532 request: 00:13:02.532 { 00:13:02.532 "method": "env_dpdk_get_mem_stats", 00:13:02.532 "req_id": 1 00:13:02.532 } 00:13:02.532 Got JSON-RPC error response 00:13:02.532 response: 00:13:02.532 { 00:13:02.532 "code": -32601, 00:13:02.532 "message": "Method not found" 00:13:02.532 } 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:02.532 08:39:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 116720 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 116720 ']' 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 116720 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116720 00:13:02.532 08:39:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.532 killing process with pid 116720 00:13:02.533 08:39:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.533 08:39:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116720' 00:13:02.533 08:39:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 116720 00:13:02.533 08:39:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 116720 00:13:05.060 00:13:05.060 real 0m4.380s 00:13:05.060 user 0m4.720s 00:13:05.060 sys 0m0.660s 00:13:05.060 08:39:40 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.060 08:39:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:05.060 ************************************ 00:13:05.060 END TEST app_cmdline 00:13:05.061 ************************************ 00:13:05.061 08:39:40 -- common/autotest_common.sh@1142 -- # return 0 00:13:05.061 08:39:40 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:05.061 08:39:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:05.061 08:39:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.061 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:05.061 ************************************ 00:13:05.061 START TEST version 00:13:05.061 ************************************ 00:13:05.061 08:39:40 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:05.061 * Looking for test storage... 00:13:05.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:05.061 08:39:40 version -- app/version.sh@17 -- # get_header_version major 00:13:05.061 08:39:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # cut -f2 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # tr -d '"' 00:13:05.061 08:39:40 version -- app/version.sh@17 -- # major=24 00:13:05.061 08:39:40 version -- app/version.sh@18 -- # get_header_version minor 00:13:05.061 08:39:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # cut -f2 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # tr -d '"' 00:13:05.061 08:39:40 version -- app/version.sh@18 -- # minor=9 00:13:05.061 08:39:40 version -- app/version.sh@19 -- # get_header_version patch 00:13:05.061 08:39:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # cut -f2 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # tr -d '"' 00:13:05.061 08:39:40 version -- app/version.sh@19 -- # patch=0 00:13:05.061 08:39:40 version -- app/version.sh@20 -- # get_header_version suffix 00:13:05.061 08:39:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # cut -f2 00:13:05.061 08:39:40 version -- app/version.sh@14 -- # tr -d '"' 00:13:05.061 08:39:40 version -- app/version.sh@20 -- # suffix=-pre 00:13:05.061 08:39:40 version -- app/version.sh@22 -- # version=24.9 00:13:05.061 08:39:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:05.061 08:39:40 version -- app/version.sh@28 -- # version=24.9rc0 00:13:05.061 08:39:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:05.061 08:39:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:05.061 08:39:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:05.061 08:39:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:05.061 ************************************ 00:13:05.061 END TEST version 00:13:05.061 ************************************ 00:13:05.061 00:13:05.061 real 0m0.140s 00:13:05.061 user 0m0.103s 00:13:05.061 sys 0m0.071s 00:13:05.061 08:39:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.061 08:39:40 version -- common/autotest_common.sh@10 -- # set +x 
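The version test that just completed derives the SPDK version from include/spdk/version.h alone and then compares it with the installed Python package. The pipeline below mirrors the grep/cut/tr commands captured above and can be pointed at any checkout; note that version.sh itself then folds the raw '-pre' suffix into the '24.9rc0' form it compares against python3's spdk.__version__.

  # Extract the raw version fields exactly as version.sh does above:
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "$major.$minor.$patch$suffix"    # prints the raw header fields, e.g. 24.9.0-pre in this run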
00:13:05.320 08:39:40 -- common/autotest_common.sh@1142 -- # return 0 00:13:05.320 08:39:40 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:13:05.320 08:39:40 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:05.320 08:39:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:05.320 08:39:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.320 08:39:40 -- common/autotest_common.sh@10 -- # set +x 00:13:05.320 ************************************ 00:13:05.320 START TEST blockdev_general 00:13:05.320 ************************************ 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:05.320 * Looking for test storage... 00:13:05.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:05.320 08:39:40 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116903 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:05.320 08:39:40 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 116903 00:13:05.320 08:39:40 blockdev_general -- 
common/autotest_common.sh@829 -- # '[' -z 116903 ']' 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.320 08:39:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:05.320 [2024-07-12 08:39:40.439258] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:13:05.320 [2024-07-12 08:39:40.439910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116903 ] 00:13:05.578 [2024-07-12 08:39:40.610792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.837 [2024-07-12 08:39:40.884784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.403 08:39:41 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.403 08:39:41 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:13:06.403 08:39:41 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:06.403 08:39:41 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:13:06.403 08:39:41 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:06.403 08:39:41 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.403 08:39:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.336 [2024-07-12 08:39:42.309539] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:07.336 [2024-07-12 08:39:42.309935] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:07.336 00:13:07.336 [2024-07-12 08:39:42.317488] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:07.336 [2024-07-12 08:39:42.317692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:07.336 00:13:07.336 Malloc0 00:13:07.336 Malloc1 00:13:07.336 Malloc2 00:13:07.336 Malloc3 00:13:07.594 Malloc4 00:13:07.594 Malloc5 00:13:07.594 Malloc6 00:13:07.594 Malloc7 00:13:07.594 Malloc8 00:13:07.594 Malloc9 00:13:07.594 [2024-07-12 08:39:42.784208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:07.594 [2024-07-12 08:39:42.784590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.594 [2024-07-12 08:39:42.784738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:07.594 [2024-07-12 08:39:42.784904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.852 [2024-07-12 08:39:42.787792] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.852 [2024-07-12 08:39:42.787950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:07.852 TestPT 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 
08:39:42 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:07.852 5000+0 records in 00:13:07.852 5000+0 records out 00:13:07.852 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265676 s, 385 MB/s 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 AIO0 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:07.852 08:39:42 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.852 08:39:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:08.109 08:39:43 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.109 08:39:43 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:08.109 08:39:43 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:08.110 08:39:43 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "da70be7b-7828-40b7-b2c5-91979cde2c68"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da70be7b-7828-40b7-b2c5-91979cde2c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c5386f8b-7e7a-58be-8a9c-ff135069ee49"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c5386f8b-7e7a-58be-8a9c-ff135069ee49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cd95d303-38fe-5a0e-923c-5a0b1f69272d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cd95d303-38fe-5a0e-923c-5a0b1f69272d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "6860fcc8-0ebf-5e45-8881-c181c108d6ee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6860fcc8-0ebf-5e45-8881-c181c108d6ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' 
"561dbdc6-2564-58c3-b183-ac85ed4a739a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "561dbdc6-2564-58c3-b183-ac85ed4a739a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "becc74e0-3d23-5c06-b028-d2162b45131c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "becc74e0-3d23-5c06-b028-d2162b45131c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b4c6c239-2545-5071-83d7-8bde3fcbf9f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4c6c239-2545-5071-83d7-8bde3fcbf9f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "6a2d8e78-aa1e-5780-8aa8-27187bf407a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a2d8e78-aa1e-5780-8aa8-27187bf407a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "67717f40-fe76-5357-9069-7fd127d23f92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67717f40-fe76-5357-9069-7fd127d23f92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "44abbedd-0807-5e0b-b76f-410c4224021d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "44abbedd-0807-5e0b-b76f-410c4224021d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e024d113-6932-56ec-b137-ed1a9269eb97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e024d113-6932-56ec-b137-ed1a9269eb97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2db57938-14d6-570b-909b-6ea0dc7ca472"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2db57938-14d6-570b-909b-6ea0dc7ca472",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7a810be-8d08-4dde-bab8-b59e89656364"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c5ab9a36-1b05-4436-9b51-50fbdc87f3f1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "722eef3e-7c2a-4e80-9ab1-8e7495648c10",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "600f40d6-d9ee-4464-b762-4775792ca09e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "12e5a2b8-6098-4d5f-8422-39be542facbe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aa89c180-c168-4651-8339-4ae9837c49fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "52813fcf-19b5-4433-b2e7-893b4495ea03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4c60c27c-2d34-48f8-b7cd-031d3f412621",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f23716c-472e-4035-8718-2db4d2a693a3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f23716c-472e-4035-8718-2db4d2a693a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:08.110 08:39:43 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:08.110 08:39:43 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:13:08.110 08:39:43 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:08.110 08:39:43 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 116903 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 116903 ']' 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 116903 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116903 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116903' 00:13:08.110 killing process with pid 116903 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@967 -- # kill 116903 00:13:08.110 08:39:43 blockdev_general -- common/autotest_common.sh@972 -- # wait 116903 00:13:11.389 08:39:46 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.389 08:39:46 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:11.389 08:39:46 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:11.389 08:39:46 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.389 08:39:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:11.389 ************************************ 00:13:11.389 START TEST bdev_hello_world 00:13:11.389 ************************************ 00:13:11.389 08:39:46 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:11.389 [2024-07-12 08:39:46.370622] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:13:11.389 [2024-07-12 08:39:46.371817] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117018 ] 00:13:11.389 [2024-07-12 08:39:46.543472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.648 [2024-07-12 08:39:46.745088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.214 [2024-07-12 08:39:47.133141] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:12.214 [2024-07-12 08:39:47.133544] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:12.214 [2024-07-12 08:39:47.141079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:12.214 [2024-07-12 08:39:47.141341] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:12.214 [2024-07-12 08:39:47.149101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:12.214 [2024-07-12 08:39:47.149369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:12.214 [2024-07-12 08:39:47.149520] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:12.214 [2024-07-12 08:39:47.343072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:12.214 [2024-07-12 08:39:47.343389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.214 [2024-07-12 08:39:47.343547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:12.214 [2024-07-12 08:39:47.343689] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.214 [2024-07-12 08:39:47.346367] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.214 [2024-07-12 08:39:47.346568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:12.472 [2024-07-12 08:39:47.657222] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:12.472 [2024-07-12 08:39:47.657651] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:12.472 [2024-07-12 08:39:47.657961] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:12.472 [2024-07-12 08:39:47.658257] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:12.472 [2024-07-12 08:39:47.658628] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:12.472 [2024-07-12 08:39:47.658867] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:12.472 [2024-07-12 08:39:47.659165] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
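The hello_bdev run traced above opens the Malloc0 bdev defined in bdev.json, writes a buffer containing "Hello World!", reads it back, and then stops the app. A minimal sketch of a JSON config that would expose such a bdev to the example is shown below, assuming the standard SPDK JSON-config layout and the bdev_malloc_create method; the file name and sizes are illustrative, not taken from the log:

cat > /tmp/hello_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
# run the example against that config, with the same flags as in the trace above
build/examples/hello_bdev --json /tmp/hello_bdev.json -b Malloc0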
00:13:12.472 00:13:12.472 [2024-07-12 08:39:47.659427] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:15.002 ************************************ 00:13:15.002 END TEST bdev_hello_world 00:13:15.002 ************************************ 00:13:15.002 00:13:15.002 real 0m3.328s 00:13:15.002 user 0m2.813s 00:13:15.002 sys 0m0.360s 00:13:15.002 08:39:49 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.002 08:39:49 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:15.002 08:39:49 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:15.002 08:39:49 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:15.002 08:39:49 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.002 08:39:49 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.002 08:39:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:15.002 ************************************ 00:13:15.002 START TEST bdev_bounds 00:13:15.002 ************************************ 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=117080 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 117080' 00:13:15.002 Process bdevio pid: 117080 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 117080 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 117080 ']' 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.002 08:39:49 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:15.002 [2024-07-12 08:39:49.753544] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:13:15.002 [2024-07-12 08:39:49.753974] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117080 ] 00:13:15.002 [2024-07-12 08:39:49.946485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.260 [2024-07-12 08:39:50.197287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.260 [2024-07-12 08:39:50.197434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.260 [2024-07-12 08:39:50.197433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.828 [2024-07-12 08:39:50.786066] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:15.828 [2024-07-12 08:39:50.786482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:15.828 [2024-07-12 08:39:50.794007] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:15.828 [2024-07-12 08:39:50.794346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:15.828 [2024-07-12 08:39:50.802018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:15.828 [2024-07-12 08:39:50.802319] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:15.828 [2024-07-12 08:39:50.802458] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:16.086 [2024-07-12 08:39:51.093193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:16.086 [2024-07-12 08:39:51.093710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.086 [2024-07-12 08:39:51.093855] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:16.086 [2024-07-12 08:39:51.094287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.086 [2024-07-12 08:39:51.098124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.086 [2024-07-12 08:39:51.098317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:16.345 08:39:51 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.345 08:39:51 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:13:16.345 08:39:51 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:16.603 I/O targets: 00:13:16.603 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:16.603 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:16.603 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:16.603 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:16.603 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:16.603 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:16.603 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:13:16.603 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:16.603 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:13:16.603 00:13:16.603 00:13:16.603 CUnit - A unit testing framework for C - Version 2.1-3 00:13:16.603 http://cunit.sourceforge.net/ 00:13:16.603 00:13:16.603 00:13:16.603 Suite: bdevio tests on: AIO0 00:13:16.603 Test: blockdev write read block ...passed 00:13:16.603 Test: blockdev write zeroes read block ...passed 00:13:16.603 Test: blockdev write zeroes read no split ...passed 00:13:16.603 Test: blockdev write zeroes read split ...passed 00:13:16.603 Test: blockdev write zeroes read split partial ...passed 00:13:16.603 Test: blockdev reset ...passed 00:13:16.603 Test: blockdev write read 8 blocks ...passed 00:13:16.603 Test: blockdev write read size > 128k ...passed 00:13:16.603 Test: blockdev write read invalid size ...passed 00:13:16.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.603 Test: blockdev write read max offset ...passed 00:13:16.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.603 Test: blockdev writev readv 8 blocks ...passed 00:13:16.603 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.603 Test: blockdev writev readv block ...passed 00:13:16.603 Test: blockdev writev readv size > 128k ...passed 00:13:16.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.603 Test: blockdev comparev and writev ...passed 00:13:16.603 Test: blockdev nvme passthru rw ...passed 00:13:16.603 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.603 Test: blockdev nvme admin passthru ...passed 00:13:16.603 Test: blockdev copy ...passed 00:13:16.603 Suite: bdevio tests on: raid1 00:13:16.603 Test: blockdev write read block ...passed 00:13:16.603 Test: blockdev write zeroes read block ...passed 00:13:16.603 Test: blockdev write zeroes read no split ...passed 00:13:16.603 Test: blockdev write zeroes read split ...passed 00:13:16.603 Test: blockdev write zeroes read split partial ...passed 00:13:16.603 Test: blockdev reset ...passed 00:13:16.603 Test: blockdev write read 8 blocks ...passed 00:13:16.603 Test: blockdev write read size > 128k ...passed 00:13:16.603 Test: blockdev write read invalid size ...passed 00:13:16.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.603 Test: blockdev write read max offset ...passed 00:13:16.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.603 Test: blockdev writev readv 8 blocks ...passed 00:13:16.603 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.603 Test: blockdev writev readv block ...passed 00:13:16.603 Test: blockdev writev readv size > 128k ...passed 00:13:16.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.603 Test: blockdev comparev and writev ...passed 00:13:16.603 Test: blockdev nvme passthru rw ...passed 00:13:16.603 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.603 Test: blockdev nvme admin passthru ...passed 00:13:16.603 Test: blockdev copy ...passed 00:13:16.603 Suite: bdevio tests on: concat0 00:13:16.603 Test: blockdev write read block ...passed 00:13:16.603 Test: blockdev write zeroes read block ...passed 00:13:16.603 Test: blockdev write zeroes read no split ...passed 00:13:16.603 Test: blockdev write zeroes read split 
...passed 00:13:16.603 Test: blockdev write zeroes read split partial ...passed 00:13:16.603 Test: blockdev reset ...passed 00:13:16.603 Test: blockdev write read 8 blocks ...passed 00:13:16.603 Test: blockdev write read size > 128k ...passed 00:13:16.603 Test: blockdev write read invalid size ...passed 00:13:16.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.603 Test: blockdev write read max offset ...passed 00:13:16.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.603 Test: blockdev writev readv 8 blocks ...passed 00:13:16.603 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.603 Test: blockdev writev readv block ...passed 00:13:16.603 Test: blockdev writev readv size > 128k ...passed 00:13:16.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.861 Test: blockdev comparev and writev ...passed 00:13:16.861 Test: blockdev nvme passthru rw ...passed 00:13:16.861 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.861 Test: blockdev nvme admin passthru ...passed 00:13:16.861 Test: blockdev copy ...passed 00:13:16.861 Suite: bdevio tests on: raid0 00:13:16.861 Test: blockdev write read block ...passed 00:13:16.861 Test: blockdev write zeroes read block ...passed 00:13:16.861 Test: blockdev write zeroes read no split ...passed 00:13:16.861 Test: blockdev write zeroes read split ...passed 00:13:16.861 Test: blockdev write zeroes read split partial ...passed 00:13:16.861 Test: blockdev reset ...passed 00:13:16.861 Test: blockdev write read 8 blocks ...passed 00:13:16.861 Test: blockdev write read size > 128k ...passed 00:13:16.861 Test: blockdev write read invalid size ...passed 00:13:16.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.861 Test: blockdev write read max offset ...passed 00:13:16.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.861 Test: blockdev writev readv 8 blocks ...passed 00:13:16.861 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.861 Test: blockdev writev readv block ...passed 00:13:16.861 Test: blockdev writev readv size > 128k ...passed 00:13:16.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.861 Test: blockdev comparev and writev ...passed 00:13:16.861 Test: blockdev nvme passthru rw ...passed 00:13:16.861 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.861 Test: blockdev nvme admin passthru ...passed 00:13:16.861 Test: blockdev copy ...passed 00:13:16.861 Suite: bdevio tests on: TestPT 00:13:16.861 Test: blockdev write read block ...passed 00:13:16.861 Test: blockdev write zeroes read block ...passed 00:13:16.861 Test: blockdev write zeroes read no split ...passed 00:13:16.861 Test: blockdev write zeroes read split ...passed 00:13:16.861 Test: blockdev write zeroes read split partial ...passed 00:13:16.861 Test: blockdev reset ...passed 00:13:16.861 Test: blockdev write read 8 blocks ...passed 00:13:16.861 Test: blockdev write read size > 128k ...passed 00:13:16.861 Test: blockdev write read invalid size ...passed 00:13:16.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.861 Test: blockdev write read max offset ...passed 00:13:16.861 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.861 Test: blockdev writev readv 8 blocks ...passed 00:13:16.861 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.861 Test: blockdev writev readv block ...passed 00:13:16.861 Test: blockdev writev readv size > 128k ...passed 00:13:16.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.861 Test: blockdev comparev and writev ...passed 00:13:16.861 Test: blockdev nvme passthru rw ...passed 00:13:16.861 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.861 Test: blockdev nvme admin passthru ...passed 00:13:16.861 Test: blockdev copy ...passed 00:13:16.861 Suite: bdevio tests on: Malloc2p7 00:13:16.861 Test: blockdev write read block ...passed 00:13:16.861 Test: blockdev write zeroes read block ...passed 00:13:16.861 Test: blockdev write zeroes read no split ...passed 00:13:16.861 Test: blockdev write zeroes read split ...passed 00:13:16.861 Test: blockdev write zeroes read split partial ...passed 00:13:16.861 Test: blockdev reset ...passed 00:13:16.861 Test: blockdev write read 8 blocks ...passed 00:13:16.861 Test: blockdev write read size > 128k ...passed 00:13:16.861 Test: blockdev write read invalid size ...passed 00:13:16.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:16.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:16.861 Test: blockdev write read max offset ...passed 00:13:16.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:16.861 Test: blockdev writev readv 8 blocks ...passed 00:13:16.861 Test: blockdev writev readv 30 x 1block ...passed 00:13:16.861 Test: blockdev writev readv block ...passed 00:13:16.861 Test: blockdev writev readv size > 128k ...passed 00:13:16.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:16.861 Test: blockdev comparev and writev ...passed 00:13:16.861 Test: blockdev nvme passthru rw ...passed 00:13:16.861 Test: blockdev nvme passthru vendor specific ...passed 00:13:16.861 Test: blockdev nvme admin passthru ...passed 00:13:16.861 Test: blockdev copy ...passed 00:13:16.861 Suite: bdevio tests on: Malloc2p6 00:13:16.861 Test: blockdev write read block ...passed 00:13:16.861 Test: blockdev write zeroes read block ...passed 00:13:16.861 Test: blockdev write zeroes read no split ...passed 00:13:16.861 Test: blockdev write zeroes read split ...passed 00:13:16.861 Test: blockdev write zeroes read split partial ...passed 00:13:17.121 Test: blockdev reset ...passed 00:13:17.121 Test: blockdev write read 8 blocks ...passed 00:13:17.121 Test: blockdev write read size > 128k ...passed 00:13:17.121 Test: blockdev write read invalid size ...passed 00:13:17.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.121 Test: blockdev write read max offset ...passed 00:13:17.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.122 Test: blockdev writev readv 8 blocks ...passed 00:13:17.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.122 Test: blockdev writev readv block ...passed 00:13:17.122 Test: blockdev writev readv size > 128k ...passed 00:13:17.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.122 Test: blockdev comparev and writev ...passed 00:13:17.122 Test: blockdev nvme passthru rw ...passed 00:13:17.122 Test: blockdev nvme passthru vendor 
specific ...passed 00:13:17.122 Test: blockdev nvme admin passthru ...passed 00:13:17.122 Test: blockdev copy ...passed 00:13:17.122 Suite: bdevio tests on: Malloc2p5 00:13:17.122 Test: blockdev write read block ...passed 00:13:17.122 Test: blockdev write zeroes read block ...passed 00:13:17.122 Test: blockdev write zeroes read no split ...passed 00:13:17.122 Test: blockdev write zeroes read split ...passed 00:13:17.122 Test: blockdev write zeroes read split partial ...passed 00:13:17.122 Test: blockdev reset ...passed 00:13:17.122 Test: blockdev write read 8 blocks ...passed 00:13:17.122 Test: blockdev write read size > 128k ...passed 00:13:17.122 Test: blockdev write read invalid size ...passed 00:13:17.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.122 Test: blockdev write read max offset ...passed 00:13:17.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.122 Test: blockdev writev readv 8 blocks ...passed 00:13:17.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.122 Test: blockdev writev readv block ...passed 00:13:17.122 Test: blockdev writev readv size > 128k ...passed 00:13:17.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.122 Test: blockdev comparev and writev ...passed 00:13:17.122 Test: blockdev nvme passthru rw ...passed 00:13:17.122 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.122 Test: blockdev nvme admin passthru ...passed 00:13:17.122 Test: blockdev copy ...passed 00:13:17.122 Suite: bdevio tests on: Malloc2p4 00:13:17.122 Test: blockdev write read block ...passed 00:13:17.122 Test: blockdev write zeroes read block ...passed 00:13:17.122 Test: blockdev write zeroes read no split ...passed 00:13:17.122 Test: blockdev write zeroes read split ...passed 00:13:17.122 Test: blockdev write zeroes read split partial ...passed 00:13:17.122 Test: blockdev reset ...passed 00:13:17.122 Test: blockdev write read 8 blocks ...passed 00:13:17.122 Test: blockdev write read size > 128k ...passed 00:13:17.122 Test: blockdev write read invalid size ...passed 00:13:17.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.122 Test: blockdev write read max offset ...passed 00:13:17.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.122 Test: blockdev writev readv 8 blocks ...passed 00:13:17.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.122 Test: blockdev writev readv block ...passed 00:13:17.122 Test: blockdev writev readv size > 128k ...passed 00:13:17.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.122 Test: blockdev comparev and writev ...passed 00:13:17.122 Test: blockdev nvme passthru rw ...passed 00:13:17.122 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.122 Test: blockdev nvme admin passthru ...passed 00:13:17.122 Test: blockdev copy ...passed 00:13:17.122 Suite: bdevio tests on: Malloc2p3 00:13:17.122 Test: blockdev write read block ...passed 00:13:17.122 Test: blockdev write zeroes read block ...passed 00:13:17.122 Test: blockdev write zeroes read no split ...passed 00:13:17.122 Test: blockdev write zeroes read split ...passed 00:13:17.122 Test: blockdev write zeroes read split partial ...passed 00:13:17.122 Test: blockdev reset ...passed 00:13:17.122 Test: 
blockdev write read 8 blocks ...passed 00:13:17.122 Test: blockdev write read size > 128k ...passed 00:13:17.122 Test: blockdev write read invalid size ...passed 00:13:17.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.122 Test: blockdev write read max offset ...passed 00:13:17.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.122 Test: blockdev writev readv 8 blocks ...passed 00:13:17.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.122 Test: blockdev writev readv block ...passed 00:13:17.122 Test: blockdev writev readv size > 128k ...passed 00:13:17.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.122 Test: blockdev comparev and writev ...passed 00:13:17.122 Test: blockdev nvme passthru rw ...passed 00:13:17.122 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.122 Test: blockdev nvme admin passthru ...passed 00:13:17.122 Test: blockdev copy ...passed 00:13:17.122 Suite: bdevio tests on: Malloc2p2 00:13:17.122 Test: blockdev write read block ...passed 00:13:17.122 Test: blockdev write zeroes read block ...passed 00:13:17.122 Test: blockdev write zeroes read no split ...passed 00:13:17.122 Test: blockdev write zeroes read split ...passed 00:13:17.122 Test: blockdev write zeroes read split partial ...passed 00:13:17.122 Test: blockdev reset ...passed 00:13:17.122 Test: blockdev write read 8 blocks ...passed 00:13:17.122 Test: blockdev write read size > 128k ...passed 00:13:17.122 Test: blockdev write read invalid size ...passed 00:13:17.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.122 Test: blockdev write read max offset ...passed 00:13:17.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.122 Test: blockdev writev readv 8 blocks ...passed 00:13:17.122 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.122 Test: blockdev writev readv block ...passed 00:13:17.122 Test: blockdev writev readv size > 128k ...passed 00:13:17.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.122 Test: blockdev comparev and writev ...passed 00:13:17.122 Test: blockdev nvme passthru rw ...passed 00:13:17.122 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.122 Test: blockdev nvme admin passthru ...passed 00:13:17.122 Test: blockdev copy ...passed 00:13:17.122 Suite: bdevio tests on: Malloc2p1 00:13:17.122 Test: blockdev write read block ...passed 00:13:17.122 Test: blockdev write zeroes read block ...passed 00:13:17.122 Test: blockdev write zeroes read no split ...passed 00:13:17.122 Test: blockdev write zeroes read split ...passed 00:13:17.381 Test: blockdev write zeroes read split partial ...passed 00:13:17.381 Test: blockdev reset ...passed 00:13:17.381 Test: blockdev write read 8 blocks ...passed 00:13:17.381 Test: blockdev write read size > 128k ...passed 00:13:17.381 Test: blockdev write read invalid size ...passed 00:13:17.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.381 Test: blockdev write read max offset ...passed 00:13:17.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.381 Test: blockdev writev readv 8 blocks ...passed 00:13:17.381 
Test: blockdev writev readv 30 x 1block ...passed 00:13:17.381 Test: blockdev writev readv block ...passed 00:13:17.381 Test: blockdev writev readv size > 128k ...passed 00:13:17.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.381 Test: blockdev comparev and writev ...passed 00:13:17.381 Test: blockdev nvme passthru rw ...passed 00:13:17.381 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.382 Test: blockdev nvme admin passthru ...passed 00:13:17.382 Test: blockdev copy ...passed 00:13:17.382 Suite: bdevio tests on: Malloc2p0 00:13:17.382 Test: blockdev write read block ...passed 00:13:17.382 Test: blockdev write zeroes read block ...passed 00:13:17.382 Test: blockdev write zeroes read no split ...passed 00:13:17.382 Test: blockdev write zeroes read split ...passed 00:13:17.382 Test: blockdev write zeroes read split partial ...passed 00:13:17.382 Test: blockdev reset ...passed 00:13:17.382 Test: blockdev write read 8 blocks ...passed 00:13:17.382 Test: blockdev write read size > 128k ...passed 00:13:17.382 Test: blockdev write read invalid size ...passed 00:13:17.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.382 Test: blockdev write read max offset ...passed 00:13:17.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.382 Test: blockdev writev readv 8 blocks ...passed 00:13:17.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.382 Test: blockdev writev readv block ...passed 00:13:17.382 Test: blockdev writev readv size > 128k ...passed 00:13:17.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.382 Test: blockdev comparev and writev ...passed 00:13:17.382 Test: blockdev nvme passthru rw ...passed 00:13:17.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.382 Test: blockdev nvme admin passthru ...passed 00:13:17.382 Test: blockdev copy ...passed 00:13:17.382 Suite: bdevio tests on: Malloc1p1 00:13:17.382 Test: blockdev write read block ...passed 00:13:17.382 Test: blockdev write zeroes read block ...passed 00:13:17.382 Test: blockdev write zeroes read no split ...passed 00:13:17.382 Test: blockdev write zeroes read split ...passed 00:13:17.382 Test: blockdev write zeroes read split partial ...passed 00:13:17.382 Test: blockdev reset ...passed 00:13:17.382 Test: blockdev write read 8 blocks ...passed 00:13:17.382 Test: blockdev write read size > 128k ...passed 00:13:17.382 Test: blockdev write read invalid size ...passed 00:13:17.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.382 Test: blockdev write read max offset ...passed 00:13:17.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.382 Test: blockdev writev readv 8 blocks ...passed 00:13:17.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.382 Test: blockdev writev readv block ...passed 00:13:17.382 Test: blockdev writev readv size > 128k ...passed 00:13:17.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.382 Test: blockdev comparev and writev ...passed 00:13:17.382 Test: blockdev nvme passthru rw ...passed 00:13:17.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.382 Test: blockdev nvme admin passthru ...passed 00:13:17.382 Test: blockdev copy ...passed 00:13:17.382 Suite: 
bdevio tests on: Malloc1p0 00:13:17.382 Test: blockdev write read block ...passed 00:13:17.382 Test: blockdev write zeroes read block ...passed 00:13:17.382 Test: blockdev write zeroes read no split ...passed 00:13:17.382 Test: blockdev write zeroes read split ...passed 00:13:17.382 Test: blockdev write zeroes read split partial ...passed 00:13:17.382 Test: blockdev reset ...passed 00:13:17.382 Test: blockdev write read 8 blocks ...passed 00:13:17.382 Test: blockdev write read size > 128k ...passed 00:13:17.382 Test: blockdev write read invalid size ...passed 00:13:17.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.382 Test: blockdev write read max offset ...passed 00:13:17.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.382 Test: blockdev writev readv 8 blocks ...passed 00:13:17.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.382 Test: blockdev writev readv block ...passed 00:13:17.382 Test: blockdev writev readv size > 128k ...passed 00:13:17.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.382 Test: blockdev comparev and writev ...passed 00:13:17.382 Test: blockdev nvme passthru rw ...passed 00:13:17.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.382 Test: blockdev nvme admin passthru ...passed 00:13:17.382 Test: blockdev copy ...passed 00:13:17.382 Suite: bdevio tests on: Malloc0 00:13:17.382 Test: blockdev write read block ...passed 00:13:17.382 Test: blockdev write zeroes read block ...passed 00:13:17.382 Test: blockdev write zeroes read no split ...passed 00:13:17.382 Test: blockdev write zeroes read split ...passed 00:13:17.382 Test: blockdev write zeroes read split partial ...passed 00:13:17.382 Test: blockdev reset ...passed 00:13:17.382 Test: blockdev write read 8 blocks ...passed 00:13:17.382 Test: blockdev write read size > 128k ...passed 00:13:17.382 Test: blockdev write read invalid size ...passed 00:13:17.382 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.382 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.382 Test: blockdev write read max offset ...passed 00:13:17.382 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.382 Test: blockdev writev readv 8 blocks ...passed 00:13:17.382 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.382 Test: blockdev writev readv block ...passed 00:13:17.382 Test: blockdev writev readv size > 128k ...passed 00:13:17.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.382 Test: blockdev comparev and writev ...passed 00:13:17.382 Test: blockdev nvme passthru rw ...passed 00:13:17.382 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.382 Test: blockdev nvme admin passthru ...passed 00:13:17.382 Test: blockdev copy ...passed 00:13:17.382 00:13:17.382 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.382 suites 16 16 n/a 0 0 00:13:17.382 tests 368 368 368 0 0 00:13:17.382 asserts 2224 2224 2224 0 n/a 00:13:17.382 00:13:17.382 Elapsed time = 2.728 seconds 00:13:17.641 0 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 117080 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 117080 ']' 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 117080 
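The run summary above is consistent with the bdevio matrix: 16 suites (one per I/O target listed earlier) times 23 per-bdev tests each gives 368 tests, all passing, backed by 2224 asserts in roughly 2.7 seconds.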
00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117080 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.641 killing process with pid 117080 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117080' 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 117080 00:13:17.641 08:39:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 117080 00:13:19.564 ************************************ 00:13:19.564 END TEST bdev_bounds 00:13:19.564 ************************************ 00:13:19.564 08:39:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:19.564 00:13:19.564 real 0m4.958s 00:13:19.564 user 0m12.301s 00:13:19.564 sys 0m0.860s 00:13:19.564 08:39:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.564 08:39:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:19.564 08:39:54 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:19.564 08:39:54 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:19.564 08:39:54 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:19.564 08:39:54 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.564 08:39:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.564 ************************************ 00:13:19.564 START TEST bdev_nbd 00:13:19.564 ************************************ 00:13:19.564 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:19.564 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:19.564 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:19.564 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local 
nbd_all 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=117174 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 117174 /var/tmp/spdk-nbd.sock 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 117174 ']' 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:19.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.565 08:39:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:19.823 [2024-07-12 08:39:54.760653] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:13:19.823 [2024-07-12 08:39:54.762960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.823 [2024-07-12 08:39:54.923587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.082 [2024-07-12 08:39:55.146913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.649 [2024-07-12 08:39:55.538685] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:20.649 [2024-07-12 08:39:55.538967] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:20.649 [2024-07-12 08:39:55.546648] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:20.649 [2024-07-12 08:39:55.546813] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:20.649 [2024-07-12 08:39:55.554677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:20.649 [2024-07-12 08:39:55.554853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:20.649 [2024-07-12 08:39:55.554994] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:20.649 [2024-07-12 08:39:55.755991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:20.649 [2024-07-12 08:39:55.756362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.649 [2024-07-12 08:39:55.756536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:20.649 [2024-07-12 08:39:55.756661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.649 [2024-07-12 08:39:55.759352] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.649 [2024-07-12 08:39:55.759543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- 
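The bdev stack that bdev_svc registers here comes from the bdev.json config passed on its command line; it is the same Malloc/split/passthru/RAID/AIO layout dumped as JSON earlier in the log. For orientation, a rough hand-written equivalent using rpc.py is sketched below; the socket path and file names are assumptions, and the real test drives this through the JSON config rather than live RPC:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# 32 MiB malloc bdev carved into eight 4 MiB splits (Malloc2p0..Malloc2p7)
$rpc_py bdev_malloc_create -b Malloc2 32 512
$rpc_py bdev_split_create Malloc2 8
# passthru bdev TestPT layered on Malloc3
$rpc_py bdev_passthru_create -b Malloc3 -p TestPT
# raid0 / concat / raid1 volumes over malloc pairs
$rpc_py bdev_raid_create -n raid0 -z 64 -r raid0 -b "Malloc4 Malloc5"
$rpc_py bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
$rpc_py bdev_raid_create -n raid1 -r raid1 -b "Malloc8 Malloc9"
# AIO bdev over a plain file, 2048-byte blocks
$rpc_py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048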
bdev/nbd_common.sh@24 -- # local i 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:21.216 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.474 1+0 records in 00:13:21.474 1+0 records out 00:13:21.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032236 s, 12.7 MB/s 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.474 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:21.733 08:39:56 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.733 1+0 records in 00:13:21.733 1+0 records out 00:13:21.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385689 s, 10.6 MB/s 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.733 08:39:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.991 1+0 records in 00:13:21.991 1+0 records out 00:13:21.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710686 s, 5.8 MB/s 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:21.991 08:39:57 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:21.991 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.250 1+0 records in 00:13:22.250 1+0 records out 00:13:22.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393729 s, 10.4 MB/s 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:22.250 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd4 /proc/partitions 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.508 1+0 records in 00:13:22.508 1+0 records out 00:13:22.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313352 s, 13.1 MB/s 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:22.508 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:22.767 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.025 1+0 records in 00:13:23.025 1+0 records out 00:13:23.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439461 s, 9.3 MB/s 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 
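The xtrace above repeats the same pattern for every exported device: a waitfornbd helper (the common/autotest_common.sh @866 through @887 entries) first polls /proc/partitions until the new node is registered, then proves the device can actually serve I/O by pulling one 4 KiB block through it with O_DIRECT and checking that the scratch file is non-empty. A minimal sketch reconstructed from those trace entries; the retry sleeps are an assumption, since every poll in this excerpt succeeds on the first pass, and the scratch path is shortened here:

    waitfornbd() {
        local nbd_name=$1
        local i size
        # Poll (up to 20 times) until the kernel registers the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the retry path is not visible in this excerpt
        done
        # Retry a single direct-I/O read until it yields a non-empty file.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != "0" ] && return 0
            fi
            sleep 0.1   # assumed, as above
        done
        return 1
    }

The dd, stat and rm sequence and the final '[' 4096 '!=' 0 ']' test are all visible verbatim in the entries above; the per-device throughput lines (12.7 MB/s, 10.6 MB/s, and so on) are just dd reporting that single 4 KiB read.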
00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.025 08:39:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.284 1+0 records in 00:13:23.284 1+0 records out 00:13:23.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624629 s, 6.6 MB/s 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.284 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.543 1+0 records in 00:13:23.543 1+0 records out 00:13:23.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049944 s, 8.2 MB/s 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.543 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.801 1+0 records in 00:13:23.801 1+0 records out 00:13:23.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751202 s, 5.5 MB/s 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:23.801 08:39:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.368 1+0 records in 00:13:24.368 1+0 records out 00:13:24.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057323 s, 7.1 MB/s 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.368 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:24.626 08:39:59 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.626 1+0 records in 00:13:24.626 1+0 records out 00:13:24.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467101 s, 8.8 MB/s 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.626 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.883 1+0 records in 00:13:24.883 1+0 records out 00:13:24.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600825 s, 6.8 MB/s 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.883 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:24.884 08:39:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.142 1+0 records in 00:13:25.142 1+0 records out 00:13:25.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103411 s, 4.0 MB/s 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:25.142 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 
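Each block is driven by the same RPC: nbd_start_disk against the dedicated /var/tmp/spdk-nbd.sock application socket. When no device argument is supplied, as in every call above, SPDK claims the next free kernel node and prints it, and the harness captures that into nbd_device. Both calling forms appear in this run; a hedged usage sketch, with the repo-relative rpc.py path as the only assumption:

    sock=/var/tmp/spdk-nbd.sock
    # Auto-allocate: SPDK picks the next free node and prints it.
    nbd_device=$(scripts/rpc.py -s "$sock" nbd_start_disk Malloc0)   # e.g. /dev/nbd0
    # Pin to an explicit node (the form used in the re-export pass later in this log).
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc1p0 /dev/nbd1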
00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.400 1+0 records in 00:13:25.400 1+0 records out 00:13:25.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969549 s, 4.2 MB/s 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:25.400 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.659 1+0 records in 00:13:25.659 1+0 records out 00:13:25.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807052 s, 5.1 MB/s 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.659 08:40:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:25.659 08:40:00 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:25.918 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.918 1+0 records in 00:13:25.918 1+0 records out 00:13:25.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126791 s, 3.2 MB/s 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:26.176 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd0", 00:13:26.435 "bdev_name": "Malloc0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd1", 00:13:26.435 "bdev_name": "Malloc1p0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd2", 00:13:26.435 "bdev_name": "Malloc1p1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd3", 00:13:26.435 "bdev_name": "Malloc2p0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd4", 00:13:26.435 "bdev_name": "Malloc2p1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd5", 00:13:26.435 "bdev_name": "Malloc2p2" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd6", 00:13:26.435 "bdev_name": "Malloc2p3" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd7", 00:13:26.435 "bdev_name": "Malloc2p4" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd8", 00:13:26.435 "bdev_name": "Malloc2p5" 00:13:26.435 
}, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd9", 00:13:26.435 "bdev_name": "Malloc2p6" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd10", 00:13:26.435 "bdev_name": "Malloc2p7" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd11", 00:13:26.435 "bdev_name": "TestPT" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd12", 00:13:26.435 "bdev_name": "raid0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd13", 00:13:26.435 "bdev_name": "concat0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd14", 00:13:26.435 "bdev_name": "raid1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd15", 00:13:26.435 "bdev_name": "AIO0" 00:13:26.435 } 00:13:26.435 ]' 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd0", 00:13:26.435 "bdev_name": "Malloc0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd1", 00:13:26.435 "bdev_name": "Malloc1p0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd2", 00:13:26.435 "bdev_name": "Malloc1p1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd3", 00:13:26.435 "bdev_name": "Malloc2p0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd4", 00:13:26.435 "bdev_name": "Malloc2p1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd5", 00:13:26.435 "bdev_name": "Malloc2p2" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd6", 00:13:26.435 "bdev_name": "Malloc2p3" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd7", 00:13:26.435 "bdev_name": "Malloc2p4" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd8", 00:13:26.435 "bdev_name": "Malloc2p5" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd9", 00:13:26.435 "bdev_name": "Malloc2p6" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd10", 00:13:26.435 "bdev_name": "Malloc2p7" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd11", 00:13:26.435 "bdev_name": "TestPT" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd12", 00:13:26.435 "bdev_name": "raid0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd13", 00:13:26.435 "bdev_name": "concat0" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd14", 00:13:26.435 "bdev_name": "raid1" 00:13:26.435 }, 00:13:26.435 { 00:13:26.435 "nbd_device": "/dev/nbd15", 00:13:26.435 "bdev_name": "AIO0" 00:13:26.435 } 00:13:26.435 ]' 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:26.435 08:40:01 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.435 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.691 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.691 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.691 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.691 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.691 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.692 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.692 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.692 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.692 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.692 08:40:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.949 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.206 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 
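With all sixteen bdevs exported, the harness snapshots the mapping via nbd_get_disks (the JSON array above pairs each /dev/nbdX with its bdev_name), flattens it to a device list with jq, and tears every node down in turn. The jq filter and the per-device nbd_stop_disk loop are shown verbatim in the @119/@120 and @53/@54 entries; assembled into one sketch:

    sock=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
    # Flatten the [{"nbd_device": ..., "bdev_name": ...}] array to device paths.
    nbd_list=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
    for dev in "${nbd_list[@]}"; do
        scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
    done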
00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.464 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.721 08:40:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.979 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.980 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.544 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.801 08:40:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.367 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.368 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.368 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.625 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:29.883 08:40:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.883 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.142 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 
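nbd_stop_disk returns as soon as the RPC is acknowledged, so each stop is followed by waitfornbd_exit, which polls until the node drops out of /proc/partitions. The nbd11 entries just above are the only ones in this run that take the retry path: the first grep still finds the device, the @39 sleep 0.1 fires, and the second grep comes back empty. Reconstructed from the @35 through @45 entries:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # Gone from /proc/partitions means the kernel has released the node.
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1
        done
        return 0
    }

As traced here the helper returns 0 whether or not the node disappeared within 20 polls; it behaves as a wait, not an assertion.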
00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.400 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.658 08:40:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.916 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.175 08:40:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.175 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:31.433 /dev/nbd0 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.433 1+0 records in 00:13:31.433 1+0 records out 00:13:31.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316559 s, 12.9 MB/s 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.433 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:31.691 /dev/nbd1 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.976 1+0 records in 00:13:31.976 1+0 records out 00:13:31.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496162 s, 8.3 MB/s 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:31.976 08:40:06 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:31.976 08:40:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:31.976 /dev/nbd10 00:13:31.976 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.240 1+0 records in 00:13:32.240 1+0 records out 00:13:32.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453347 s, 9.0 MB/s 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:32.240 /dev/nbd11 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 
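The poll-and-break sequence just traced is the readiness probe that runs for every attached device: wait for the name to appear in /proc/partitions, then read one 4 KiB block back with O_DIRECT and confirm a non-empty transfer. A minimal sketch of the waitfornbd helper as the @866-@887 trace lines suggest (the helper lives in common/autotest_common.sh; the retry pacing and the scratch path here are assumptions, not taken from the log):

    waitfornbd() {
        local nbd_name=$1
        local i
        local tmp_file=/tmp/nbdtest  # assumed scratch path; the log writes to test/bdev/nbdtest

        # Wait for the kernel to register the device (up to 20 polls).
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1  # pacing assumed
        done

        # Read a single 4 KiB block with O_DIRECT and confirm bytes actually arrived.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                if [ "$size" != 0 ]; then
                    return 0
                fi
            fi
            sleep 0.1
        done
        return 1
    }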
00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.240 1+0 records in 00:13:32.240 1+0 records out 00:13:32.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041803 s, 9.8 MB/s 00:13:32.240 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:32.499 /dev/nbd12 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.499 1+0 records in 00:13:32.499 1+0 records out 00:13:32.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305727 s, 13.4 MB/s 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.499 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:32.757 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:32.757 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.757 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
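Each of the sixteen device blocks in this section follows the same start-then-probe cycle, driven by the loop whose bound check (( i < 16 )) was just traced. A condensed sketch of nbd_start_disks (bdev/nbd_common.sh@9-@17 in the trace), with $rootdir standing in for the /home/vagrant/spdk_repo/spdk checkout and waitfornbd being the helper sketched above:

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)  # "Malloc0 Malloc1p0 ... AIO0"
        local nbd_list=($3)   # "/dev/nbd0 /dev/nbd1 /dev/nbd10 ... /dev/nbd9"
        local i

        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # Attach the i-th bdev to the i-th nbd node over the RPC socket...
            "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
                "${bdev_list[$i]}" "${nbd_list[$i]}"
            # ...and block until the node is actually readable.
            waitfornbd "$(basename "${nbd_list[$i]}")"
        done
    }

Note that the nbd list is passed in lexicographic order (/dev/nbd0, /dev/nbd1, /dev/nbd10-/dev/nbd15, /dev/nbd2-/dev/nbd9), which is why Malloc2p5 lands on /dev/nbd2 further down rather than on /dev/nbd8.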
00:13:32.757 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:33.015 /dev/nbd13 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.015 1+0 records in 00:13:33.015 1+0 records out 00:13:33.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396656 s, 10.3 MB/s 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.015 08:40:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:33.274 /dev/nbd14 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.274 1+0 records in 00:13:33.274 1+0 records out 00:13:33.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445083 s, 9.2 MB/s 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.274 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:33.533 /dev/nbd15 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.533 1+0 records in 00:13:33.533 1+0 records out 00:13:33.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430333 s, 9.5 MB/s 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.533 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:33.791 /dev/nbd2 00:13:33.791 08:40:08 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.791 1+0 records in 00:13:33.791 1+0 records out 00:13:33.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056462 s, 7.3 MB/s 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:33.791 08:40:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:34.049 /dev/nbd3 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.049 1+0 records in 00:13:34.049 1+0 records out 00:13:34.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373114 s, 
11.0 MB/s 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.049 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:34.615 /dev/nbd4 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.615 1+0 records in 00:13:34.615 1+0 records out 00:13:34.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611954 s, 6.7 MB/s 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.615 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:34.874 /dev/nbd5 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 
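Bracketing these per-device probes, the test counts attached devices over RPC twice: the @61-@66 lines at the top of this section returned 0 before anything was attached, and the same helper returns 16 once all devices are up (see the nbd_get_disks JSON below). A sketch of nbd_get_count as those trace lines suggest:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count

        # List attached devices as JSON and extract the /dev/nbd* paths.
        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero on no match; the || true guard
        # mirrors the bare `true` that follows grep -c in the trace.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }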
00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.874 1+0 records in 00:13:34.874 1+0 records out 00:13:34.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465437 s, 8.8 MB/s 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:34.874 08:40:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:35.134 /dev/nbd6 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.134 1+0 records in 00:13:35.134 1+0 records out 00:13:35.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797285 s, 5.1 MB/s 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:35.134 08:40:10 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.134 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:35.394 /dev/nbd7 00:13:35.394 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:35.394 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:35.394 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.395 1+0 records in 00:13:35.395 1+0 records out 00:13:35.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127797 s, 3.2 MB/s 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.395 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:35.658 /dev/nbd8 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 
20 )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.658 1+0 records in 00:13:35.658 1+0 records out 00:13:35.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100069 s, 4.1 MB/s 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.658 08:40:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:35.918 /dev/nbd9 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.918 1+0 records in 00:13:35.918 1+0 records out 00:13:35.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000988388 s, 4.1 MB/s 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.918 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.177 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd0", 00:13:36.177 "bdev_name": "Malloc0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd1", 00:13:36.177 "bdev_name": "Malloc1p0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd10", 00:13:36.177 "bdev_name": "Malloc1p1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd11", 00:13:36.177 "bdev_name": "Malloc2p0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd12", 00:13:36.177 "bdev_name": "Malloc2p1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd13", 00:13:36.177 "bdev_name": "Malloc2p2" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd14", 00:13:36.177 "bdev_name": "Malloc2p3" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd15", 00:13:36.177 "bdev_name": "Malloc2p4" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd2", 00:13:36.177 "bdev_name": "Malloc2p5" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd3", 00:13:36.177 "bdev_name": "Malloc2p6" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd4", 00:13:36.177 "bdev_name": "Malloc2p7" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd5", 00:13:36.177 "bdev_name": "TestPT" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd6", 00:13:36.177 "bdev_name": "raid0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd7", 00:13:36.177 "bdev_name": "concat0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd8", 00:13:36.177 "bdev_name": "raid1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd9", 00:13:36.177 "bdev_name": "AIO0" 00:13:36.177 } 00:13:36.177 ]' 00:13:36.177 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd0", 00:13:36.177 "bdev_name": "Malloc0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd1", 00:13:36.177 "bdev_name": "Malloc1p0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd10", 00:13:36.177 "bdev_name": "Malloc1p1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd11", 00:13:36.177 "bdev_name": "Malloc2p0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd12", 00:13:36.177 "bdev_name": "Malloc2p1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd13", 00:13:36.177 "bdev_name": "Malloc2p2" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd14", 00:13:36.177 "bdev_name": "Malloc2p3" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd15", 00:13:36.177 "bdev_name": "Malloc2p4" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd2", 00:13:36.177 "bdev_name": "Malloc2p5" 00:13:36.177 }, 00:13:36.177 { 
00:13:36.177 "nbd_device": "/dev/nbd3", 00:13:36.177 "bdev_name": "Malloc2p6" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd4", 00:13:36.177 "bdev_name": "Malloc2p7" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd5", 00:13:36.177 "bdev_name": "TestPT" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd6", 00:13:36.177 "bdev_name": "raid0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd7", 00:13:36.177 "bdev_name": "concat0" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd8", 00:13:36.177 "bdev_name": "raid1" 00:13:36.177 }, 00:13:36.177 { 00:13:36.177 "nbd_device": "/dev/nbd9", 00:13:36.177 "bdev_name": "AIO0" 00:13:36.177 } 00:13:36.177 ]' 00:13:36.177 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:36.435 /dev/nbd1 00:13:36.435 /dev/nbd10 00:13:36.435 /dev/nbd11 00:13:36.435 /dev/nbd12 00:13:36.435 /dev/nbd13 00:13:36.435 /dev/nbd14 00:13:36.435 /dev/nbd15 00:13:36.435 /dev/nbd2 00:13:36.435 /dev/nbd3 00:13:36.435 /dev/nbd4 00:13:36.435 /dev/nbd5 00:13:36.435 /dev/nbd6 00:13:36.435 /dev/nbd7 00:13:36.435 /dev/nbd8 00:13:36.435 /dev/nbd9' 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:36.435 /dev/nbd1 00:13:36.435 /dev/nbd10 00:13:36.435 /dev/nbd11 00:13:36.435 /dev/nbd12 00:13:36.435 /dev/nbd13 00:13:36.435 /dev/nbd14 00:13:36.435 /dev/nbd15 00:13:36.435 /dev/nbd2 00:13:36.435 /dev/nbd3 00:13:36.435 /dev/nbd4 00:13:36.435 /dev/nbd5 00:13:36.435 /dev/nbd6 00:13:36.435 /dev/nbd7 00:13:36.435 /dev/nbd8 00:13:36.435 /dev/nbd9' 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:36.435 256+0 records in 00:13:36.435 256+0 records out 00:13:36.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840578 s, 125 MB/s 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.435 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:36.435 256+0 records in 00:13:36.436 256+0 records out 00:13:36.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154262 s, 6.8 MB/s 00:13:36.436 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.436 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:36.694 256+0 records in 00:13:36.694 256+0 records out 00:13:36.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156506 s, 6.7 MB/s 00:13:36.694 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.694 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:36.953 256+0 records in 00:13:36.953 256+0 records out 00:13:36.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157385 s, 6.7 MB/s 00:13:36.954 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.954 08:40:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:36.954 256+0 records in 00:13:36.954 256+0 records out 00:13:36.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157372 s, 6.7 MB/s 00:13:36.954 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.954 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:37.212 256+0 records in 00:13:37.212 256+0 records out 00:13:37.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157138 s, 6.7 MB/s 00:13:37.212 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.212 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:37.212 256+0 records in 00:13:37.212 256+0 records out 00:13:37.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155789 s, 6.7 MB/s 00:13:37.212 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.213 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:37.471 256+0 records in 00:13:37.471 256+0 records out 00:13:37.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153666 s, 6.8 MB/s 00:13:37.471 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.471 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:37.730 256+0 records in 00:13:37.730 256+0 records out 00:13:37.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157909 s, 6.6 MB/s 00:13:37.730 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.730 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:37.730 256+0 records in 00:13:37.730 256+0 records out 00:13:37.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152609 s, 6.9 MB/s 00:13:37.730 08:40:12 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.730 08:40:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:38.000 256+0 records in 00:13:38.000 256+0 records out 00:13:38.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154253 s, 6.8 MB/s 00:13:38.000 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.000 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:38.000 256+0 records in 00:13:38.000 256+0 records out 00:13:38.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15524 s, 6.8 MB/s 00:13:38.000 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.000 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:38.263 256+0 records in 00:13:38.263 256+0 records out 00:13:38.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156201 s, 6.7 MB/s 00:13:38.263 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.264 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:38.521 256+0 records in 00:13:38.521 256+0 records out 00:13:38.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15548 s, 6.7 MB/s 00:13:38.522 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.522 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:38.522 256+0 records in 00:13:38.522 256+0 records out 00:13:38.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15706 s, 6.7 MB/s 00:13:38.522 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.522 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:38.780 256+0 records in 00:13:38.780 256+0 records out 00:13:38.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158158 s, 6.6 MB/s 00:13:38.780 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.780 08:40:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:39.038 256+0 records in 00:13:39.038 256+0 records out 00:13:39.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227115 s, 4.6 MB/s 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.038 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.039 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.606 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.865 08:40:14 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.865 08:40:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.123 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.382 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.641 08:40:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.899 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:41.158 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.417 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.677 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.936 08:40:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.194 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.452 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:42.711 08:40:17 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.711 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:42.972 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:42.972 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.973 08:40:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.232 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 
/proc/partitions 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.491 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.750 08:40:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:44.317 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:44.576 malloc_lvol_verify 00:13:44.576 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:44.835 45dd7122-9527-467e-8f46-f70e08f9b7b9 00:13:44.835 08:40:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:45.093 25c39027-7311-4c4c-80e8-ad8811f903e4 00:13:45.093 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:45.352 /dev/nbd0 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:45.352 mke2fs 1.45.5 (07-Jan-2020) 00:13:45.352 00:13:45.352 Filesystem too small for a journal 00:13:45.352 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:45.352 00:13:45.352 Allocating group tables: 0/1 done 00:13:45.352 Writing inode tables: 0/1 done 00:13:45.352 Writing superblocks and filesystem accounting information: 0/1 done 00:13:45.352 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.352 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 117174 00:13:45.611 08:40:20 
blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 117174 ']' 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 117174 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117174 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117174' 00:13:45.611 killing process with pid 117174 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 117174 00:13:45.611 08:40:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 117174 00:13:48.195 ************************************ 00:13:48.195 END TEST bdev_nbd 00:13:48.195 ************************************ 00:13:48.195 08:40:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:48.195 00:13:48.195 real 0m28.340s 00:13:48.195 user 0m38.863s 00:13:48.195 sys 0m9.798s 00:13:48.195 08:40:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.195 08:40:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:48.195 08:40:23 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:48.195 08:40:23 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:48.195 08:40:23 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:13:48.195 08:40:23 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:13:48.195 08:40:23 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:48.195 08:40:23 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:48.195 08:40:23 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.195 08:40:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:48.195 ************************************ 00:13:48.195 START TEST bdev_fio 00:13:48.195 ************************************ 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:13:48.195 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc2p2 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.195 08:40:23 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:48.195 ************************************ 00:13:48.195 START TEST bdev_fio_rw_verify 00:13:48.195 ************************************ 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:48.195 08:40:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:48.195 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:48.195 fio-3.35 00:13:48.196 Starting 16 threads 00:14:00.449 00:14:00.449 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=118435: Fri Jul 12 08:40:35 2024 00:14:00.449 read: IOPS=68.1k, BW=266MiB/s (279MB/s)(2662MiB/10006msec) 00:14:00.449 slat (usec): min=2, max=33940, avg=42.61, stdev=467.18 00:14:00.449 clat (usec): min=11, max=42949, avg=348.56, stdev=1406.92 00:14:00.449 lat (usec): min=31, max=42985, avg=391.17, stdev=1481.77 00:14:00.449 clat percentiles (usec): 00:14:00.449 | 50.000th=[ 204], 99.000th=[ 1352], 99.900th=[16450], 99.990th=[24511], 00:14:00.449 | 99.999th=[42730] 00:14:00.449 write: IOPS=110k, BW=428MiB/s (449MB/s)(4218MiB/9857msec); 0 zone resets 00:14:00.449 slat (usec): min=6, max=45806, avg=70.77, stdev=632.66 00:14:00.449 clat (usec): min=7, max=46130, avg=434.45, stdev=1567.13 00:14:00.449 lat (usec): min=34, max=46159, avg=505.22, stdev=1689.03 00:14:00.449 clat percentiles (usec): 00:14:00.449 | 50.000th=[ 258], 99.000th=[ 8291], 99.900th=[17171], 99.990th=[28181], 00:14:00.449 | 99.999th=[40109] 00:14:00.449 bw ( KiB/s): min=254992, max=695424, per=97.79%, avg=428491.74, stdev=7839.47, samples=304 00:14:00.449 iops : min=63748, max=173856, avg=107122.89, stdev=1959.86, samples=304 
00:14:00.449 lat (usec) : 10=0.01%, 20=0.01%, 50=0.26%, 100=7.30%, 250=48.32% 00:14:00.449 lat (usec) : 500=39.08%, 750=2.94%, 1000=0.45% 00:14:00.449 lat (msec) : 2=0.52%, 4=0.07%, 10=0.16%, 20=0.80%, 50=0.08% 00:14:00.449 cpu : usr=58.41%, sys=1.78%, ctx=221955, majf=0, minf=74897 00:14:00.449 IO depths : 1=11.5%, 2=24.0%, 4=51.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.449 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.449 issued rwts: total=681544,1079763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:00.449 00:14:00.449 Run status group 0 (all jobs): 00:14:00.449 READ: bw=266MiB/s (279MB/s), 266MiB/s-266MiB/s (279MB/s-279MB/s), io=2662MiB (2792MB), run=10006-10006msec 00:14:00.449 WRITE: bw=428MiB/s (449MB/s), 428MiB/s-428MiB/s (449MB/s-449MB/s), io=4218MiB (4423MB), run=9857-9857msec 00:14:02.987 ----------------------------------------------------- 00:14:02.987 Suppressions used: 00:14:02.987 count bytes template 00:14:02.987 16 140 /usr/src/fio/parse.c 00:14:02.987 12704 1219584 /usr/src/fio/iolog.c 00:14:02.987 2 596 libcrypto.so 00:14:02.987 ----------------------------------------------------- 00:14:02.987 00:14:02.987 00:14:02.987 real 0m14.496s 00:14:02.987 user 1m39.531s 00:14:02.987 sys 0m3.814s 00:14:02.987 08:40:37 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.987 08:40:37 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:02.987 ************************************ 00:14:02.987 END TEST bdev_fio_rw_verify 00:14:02.987 ************************************ 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:02.987 08:40:37 
blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:02.987 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:02.989 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "da70be7b-7828-40b7-b2c5-91979cde2c68"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da70be7b-7828-40b7-b2c5-91979cde2c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c5386f8b-7e7a-58be-8a9c-ff135069ee49"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c5386f8b-7e7a-58be-8a9c-ff135069ee49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cd95d303-38fe-5a0e-923c-5a0b1f69272d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cd95d303-38fe-5a0e-923c-5a0b1f69272d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "6860fcc8-0ebf-5e45-8881-c181c108d6ee"' ' ],' ' "product_name": 
"Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6860fcc8-0ebf-5e45-8881-c181c108d6ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "561dbdc6-2564-58c3-b183-ac85ed4a739a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "561dbdc6-2564-58c3-b183-ac85ed4a739a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "becc74e0-3d23-5c06-b028-d2162b45131c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "becc74e0-3d23-5c06-b028-d2162b45131c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b4c6c239-2545-5071-83d7-8bde3fcbf9f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4c6c239-2545-5071-83d7-8bde3fcbf9f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "6a2d8e78-aa1e-5780-8aa8-27187bf407a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a2d8e78-aa1e-5780-8aa8-27187bf407a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "67717f40-fe76-5357-9069-7fd127d23f92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67717f40-fe76-5357-9069-7fd127d23f92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "44abbedd-0807-5e0b-b76f-410c4224021d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "44abbedd-0807-5e0b-b76f-410c4224021d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e024d113-6932-56ec-b137-ed1a9269eb97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e024d113-6932-56ec-b137-ed1a9269eb97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2db57938-14d6-570b-909b-6ea0dc7ca472"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2db57938-14d6-570b-909b-6ea0dc7ca472",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7a810be-8d08-4dde-bab8-b59e89656364"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c5ab9a36-1b05-4436-9b51-50fbdc87f3f1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "722eef3e-7c2a-4e80-9ab1-8e7495648c10",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' 
'}' '{' ' "name": "concat0",' ' "aliases": [' ' "600f40d6-d9ee-4464-b762-4775792ca09e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "12e5a2b8-6098-4d5f-8422-39be542facbe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aa89c180-c168-4651-8339-4ae9837c49fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "52813fcf-19b5-4433-b2e7-893b4495ea03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4c60c27c-2d34-48f8-b7cd-031d3f412621",' ' 
"is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f23716c-472e-4035-8718-2db4d2a693a3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f23716c-472e-4035-8718-2db4d2a693a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:02.989 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:14:02.989 Malloc1p0 00:14:02.989 Malloc1p1 00:14:02.989 Malloc2p0 00:14:02.989 Malloc2p1 00:14:02.989 Malloc2p2 00:14:02.989 Malloc2p3 00:14:02.989 Malloc2p4 00:14:02.989 Malloc2p5 00:14:02.989 Malloc2p6 00:14:02.989 Malloc2p7 00:14:02.989 TestPT 00:14:02.989 raid0 00:14:02.989 concat0 ]] 00:14:02.989 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "da70be7b-7828-40b7-b2c5-91979cde2c68"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "da70be7b-7828-40b7-b2c5-91979cde2c68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "c5386f8b-7e7a-58be-8a9c-ff135069ee49"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c5386f8b-7e7a-58be-8a9c-ff135069ee49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cd95d303-38fe-5a0e-923c-5a0b1f69272d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cd95d303-38fe-5a0e-923c-5a0b1f69272d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "6860fcc8-0ebf-5e45-8881-c181c108d6ee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6860fcc8-0ebf-5e45-8881-c181c108d6ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "561dbdc6-2564-58c3-b183-ac85ed4a739a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "561dbdc6-2564-58c3-b183-ac85ed4a739a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "becc74e0-3d23-5c06-b028-d2162b45131c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "becc74e0-3d23-5c06-b028-d2162b45131c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "b4c6c239-2545-5071-83d7-8bde3fcbf9f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4c6c239-2545-5071-83d7-8bde3fcbf9f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "6a2d8e78-aa1e-5780-8aa8-27187bf407a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6a2d8e78-aa1e-5780-8aa8-27187bf407a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "67717f40-fe76-5357-9069-7fd127d23f92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67717f40-fe76-5357-9069-7fd127d23f92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": 
"Malloc2p6",' ' "aliases": [' ' "44abbedd-0807-5e0b-b76f-410c4224021d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "44abbedd-0807-5e0b-b76f-410c4224021d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e024d113-6932-56ec-b137-ed1a9269eb97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e024d113-6932-56ec-b137-ed1a9269eb97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2db57938-14d6-570b-909b-6ea0dc7ca472"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2db57938-14d6-570b-909b-6ea0dc7ca472",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7a810be-8d08-4dde-bab8-b59e89656364"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7a810be-8d08-4dde-bab8-b59e89656364",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c5ab9a36-1b05-4436-9b51-50fbdc87f3f1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "722eef3e-7c2a-4e80-9ab1-8e7495648c10",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "600f40d6-d9ee-4464-b762-4775792ca09e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "600f40d6-d9ee-4464-b762-4775792ca09e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "12e5a2b8-6098-4d5f-8422-39be542facbe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "aa89c180-c168-4651-8339-4ae9837c49fc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0eafa5a3-b0dd-4b25-ad32-7b89993ef6d1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "52813fcf-19b5-4433-b2e7-893b4495ea03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4c60c27c-2d34-48f8-b7cd-031d3f412621",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5f23716c-472e-4035-8718-2db4d2a693a3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5f23716c-472e-4035-8718-2db4d2a693a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:14:02.990 08:40:37 
blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # 
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:14:02.990 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.991 08:40:37 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:02.991 ************************************ 00:14:02.991 START TEST bdev_fio_trim 00:14:02.991 ************************************ 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.991 08:40:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:02.991 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:02.991 fio-3.35 00:14:02.991 Starting 14 threads 00:14:15.224 00:14:15.224 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=118675: Fri Jul 12 08:40:49 2024 00:14:15.224 write: IOPS=121k, BW=471MiB/s (494MB/s)(4713MiB/10004msec); 0 zone resets 00:14:15.224 slat (usec): min=3, max=24052, avg=42.83, stdev=435.01 00:14:15.224 clat (usec): min=27, max=32344, avg=281.67, stdev=1133.77 00:14:15.224 lat (usec): min=41, max=32381, avg=324.51, stdev=1214.09 00:14:15.224 clat percentiles (usec): 00:14:15.224 | 50.000th=[ 192], 99.000th=[ 404], 99.900th=[16319], 99.990th=[20317], 00:14:15.224 | 99.999th=[24511] 00:14:15.224 bw ( KiB/s): min=338476, max=703924, per=99.35%, avg=479237.00, stdev=8040.08, samples=266 
00:14:15.224 iops : min=84619, max=175981, avg=119809.21, stdev=2010.02, samples=266 00:14:15.224 trim: IOPS=121k, BW=471MiB/s (494MB/s)(4713MiB/10004msec); 0 zone resets 00:14:15.224 slat (usec): min=5, max=32043, avg=28.26, stdev=358.28 00:14:15.224 clat (usec): min=4, max=32381, avg=323.99, stdev=1213.35 00:14:15.224 lat (usec): min=15, max=32405, avg=352.25, stdev=1265.00 00:14:15.224 clat percentiles (usec): 00:14:15.224 | 50.000th=[ 223], 99.000th=[ 457], 99.900th=[16319], 99.990th=[20317], 00:14:15.224 | 99.999th=[24511] 00:14:15.224 bw ( KiB/s): min=338484, max=703916, per=99.35%, avg=479237.00, stdev=8040.01, samples=266 00:14:15.224 iops : min=84621, max=175979, avg=119809.21, stdev=2010.00, samples=266 00:14:15.224 lat (usec) : 10=0.01%, 20=0.01%, 50=0.23%, 100=4.63%, 250=64.45% 00:14:15.224 lat (usec) : 500=29.90%, 750=0.10%, 1000=0.03% 00:14:15.224 lat (msec) : 2=0.03%, 4=0.01%, 10=0.06%, 20=0.53%, 50=0.02% 00:14:15.224 cpu : usr=68.80%, sys=0.35%, ctx=169595, majf=0, minf=805 00:14:15.224 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.224 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.224 issued rwts: total=0,1206443,1206446,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.224 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:15.224 00:14:15.224 Run status group 0 (all jobs): 00:14:15.224 WRITE: bw=471MiB/s (494MB/s), 471MiB/s-471MiB/s (494MB/s-494MB/s), io=4713MiB (4942MB), run=10004-10004msec 00:14:15.224 TRIM: bw=471MiB/s (494MB/s), 471MiB/s-471MiB/s (494MB/s-494MB/s), io=4713MiB (4942MB), run=10004-10004msec 00:14:17.229 ----------------------------------------------------- 00:14:17.229 Suppressions used: 00:14:17.229 count bytes template 00:14:17.229 14 129 /usr/src/fio/parse.c 00:14:17.229 2 596 libcrypto.so 00:14:17.229 ----------------------------------------------------- 00:14:17.229 00:14:17.229 00:14:17.229 real 0m14.274s 00:14:17.229 user 1m41.824s 00:14:17.229 sys 0m1.344s 00:14:17.229 08:40:52 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.229 ************************************ 00:14:17.229 END TEST bdev_fio_trim 00:14:17.229 ************************************ 00:14:17.229 08:40:52 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.229 /home/vagrant/spdk_repo/spdk 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:14:17.229 00:14:17.229 real 0m29.082s 00:14:17.229 user 3m21.565s 00:14:17.229 sys 0m5.252s 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.229 08:40:52 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:17.229 ************************************ 00:14:17.229 END TEST bdev_fio 00:14:17.229 ************************************ 00:14:17.229 08:40:52 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:17.229 08:40:52 blockdev_general -- 
bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:17.229 08:40:52 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:17.229 08:40:52 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:17.229 08:40:52 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.229 08:40:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:17.229 ************************************ 00:14:17.229 START TEST bdev_verify 00:14:17.229 ************************************ 00:14:17.229 08:40:52 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:17.229 [2024-07-12 08:40:52.277657] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:14:17.229 [2024-07-12 08:40:52.277855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118881 ] 00:14:17.488 [2024-07-12 08:40:52.448451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.746 [2024-07-12 08:40:52.707511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.746 [2024-07-12 08:40:52.707515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.004 [2024-07-12 08:40:53.093645] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:18.004 [2024-07-12 08:40:53.093734] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:18.004 [2024-07-12 08:40:53.101598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:18.004 [2024-07-12 08:40:53.101662] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:18.004 [2024-07-12 08:40:53.109625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:18.004 [2024-07-12 08:40:53.109722] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:18.004 [2024-07-12 08:40:53.109747] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:18.262 [2024-07-12 08:40:53.302756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:18.262 [2024-07-12 08:40:53.302897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.262 [2024-07-12 08:40:53.302987] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:18.262 [2024-07-12 08:40:53.303023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.262 [2024-07-12 08:40:53.305813] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.262 [2024-07-12 08:40:53.305867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:18.825 Running I/O for 5 seconds... 
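For reference, the verify stage launched just above reduces to a single bdevperf invocation. A minimal sketch, not part of the captured output, assuming the /home/vagrant/spdk_repo/spdk checkout path printed in the log; the flags are copied from the run_test line and the glosses follow the bdevperf usage text:

SPDK_REPO=/home/vagrant/spdk_repo/spdk            # checkout path as printed in the log
"$SPDK_REPO/build/examples/bdevperf" \
    --json "$SPDK_REPO/test/bdev/bdev.json" \     # same bdev definitions dumped earlier in this section
    -q 128 \                                      # 128 outstanding I/Os per job
    -o 4096 \                                     # 4 KiB I/O size
    -w verify \                                   # write, then read back and compare
    -t 5 \                                        # run for 5 seconds
    -C \                                          # let every core submit I/O to each bdev
    -m 0x3                                        # core mask 0x3: reactors on cores 0 and 1, as shown above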
00:14:24.112 00:14:24.112 Latency(us) 00:14:24.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.112 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x1000 00:14:24.112 Malloc0 : 5.13 1346.57 5.26 0.00 0.00 94897.51 625.57 194463.19 00:14:24.112 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x1000 length 0x1000 00:14:24.112 Malloc0 : 5.14 1345.03 5.25 0.00 0.00 95006.97 595.78 312666.30 00:14:24.112 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x800 00:14:24.112 Malloc1p0 : 5.18 691.79 2.70 0.00 0.00 184250.47 3470.43 184930.68 00:14:24.112 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x800 length 0x800 00:14:24.112 Malloc1p0 : 5.14 697.11 2.72 0.00 0.00 182866.56 3485.32 176351.42 00:14:24.112 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x800 00:14:24.112 Malloc1p1 : 5.18 691.52 2.70 0.00 0.00 183895.37 3351.27 181117.67 00:14:24.112 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x800 length 0x800 00:14:24.112 Malloc1p1 : 5.14 696.81 2.72 0.00 0.00 182515.25 3291.69 171585.16 00:14:24.112 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p0 : 5.18 691.24 2.70 0.00 0.00 183554.45 3217.22 177304.67 00:14:24.112 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x200 length 0x200 00:14:24.112 Malloc2p0 : 5.15 696.52 2.72 0.00 0.00 182194.11 3202.33 168725.41 00:14:24.112 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p1 : 5.19 690.97 2.70 0.00 0.00 183207.08 3247.01 173491.67 00:14:24.112 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x200 length 0x200 00:14:24.112 Malloc2p1 : 5.15 696.22 2.72 0.00 0.00 181845.70 3232.12 164912.41 00:14:24.112 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p2 : 5.19 690.69 2.70 0.00 0.00 182873.24 3336.38 169678.66 00:14:24.112 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x200 length 0x200 00:14:24.112 Malloc2p2 : 5.15 695.92 2.72 0.00 0.00 181514.74 3664.06 161099.40 00:14:24.112 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p3 : 5.19 690.42 2.70 0.00 0.00 182547.17 3381.06 166818.91 00:14:24.112 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x200 length 0x200 00:14:24.112 Malloc2p3 : 5.15 695.63 2.72 0.00 0.00 181171.96 3351.27 156333.15 00:14:24.112 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p4 : 5.19 690.16 2.70 0.00 0.00 182207.81 3410.85 
165865.66 00:14:24.112 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x200 length 0x200 00:14:24.112 Malloc2p4 : 5.15 695.33 2.72 0.00 0.00 180843.97 3395.96 154426.65 00:14:24.112 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.112 Verification LBA range: start 0x0 length 0x200 00:14:24.112 Malloc2p5 : 5.20 689.87 2.69 0.00 0.00 181887.71 3276.80 163005.91 00:14:24.113 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x200 length 0x200 00:14:24.113 Malloc2p5 : 5.16 695.03 2.71 0.00 0.00 180516.22 3276.80 151566.89 00:14:24.113 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x200 00:14:24.113 Malloc2p6 : 5.20 689.57 2.69 0.00 0.00 181581.69 3381.06 160146.15 00:14:24.113 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x200 length 0x200 00:14:24.113 Malloc2p6 : 5.16 694.74 2.71 0.00 0.00 180208.96 3381.06 149660.39 00:14:24.113 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x200 00:14:24.113 Malloc2p7 : 5.20 689.18 2.69 0.00 0.00 181271.73 3291.69 159192.90 00:14:24.113 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x200 length 0x200 00:14:24.113 Malloc2p7 : 5.16 694.45 2.71 0.00 0.00 179886.49 3306.59 146800.64 00:14:24.113 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x1000 00:14:24.113 TestPT : 5.22 686.52 2.68 0.00 0.00 181464.60 11975.21 159192.90 00:14:24.113 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x1000 length 0x1000 00:14:24.113 TestPT : 5.21 688.10 2.69 0.00 0.00 181064.69 10724.07 224967.21 00:14:24.113 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x2000 00:14:24.113 raid0 : 5.21 688.44 2.69 0.00 0.00 180526.14 3515.11 154426.65 00:14:24.113 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x2000 length 0x2000 00:14:24.113 raid0 : 5.21 712.21 2.78 0.00 0.00 174533.84 3530.01 130595.37 00:14:24.113 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x2000 00:14:24.113 concat0 : 5.21 688.13 2.69 0.00 0.00 180179.05 3500.22 161099.40 00:14:24.113 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x2000 length 0x2000 00:14:24.113 concat0 : 5.21 711.80 2.78 0.00 0.00 174233.87 3470.43 134408.38 00:14:24.113 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 length 0x1000 00:14:24.113 raid1 : 5.21 687.68 2.69 0.00 0.00 179827.56 4140.68 167772.16 00:14:24.113 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x1000 length 0x1000 00:14:24.113 raid1 : 5.22 711.53 2.78 0.00 0.00 173847.65 4289.63 140127.88 00:14:24.113 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x0 
length 0x4e2 00:14:24.113 AIO0 : 5.22 710.01 2.77 0.00 0.00 173717.45 815.48 172538.41 00:14:24.113 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.113 Verification LBA range: start 0x4e2 length 0x4e2 00:14:24.113 AIO0 : 5.22 710.75 2.78 0.00 0.00 173543.72 2323.55 148707.14 00:14:24.113 =================================================================================================================== 00:14:24.113 Total : 23549.95 91.99 0.00 0.00 170734.94 595.78 312666.30 00:14:26.010 00:14:26.010 real 0m8.910s 00:14:26.010 user 0m16.148s 00:14:26.010 sys 0m0.532s 00:14:26.010 08:41:01 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.010 08:41:01 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:26.010 ************************************ 00:14:26.010 END TEST bdev_verify 00:14:26.010 ************************************ 00:14:26.010 08:41:01 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:26.010 08:41:01 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:26.010 08:41:01 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:26.010 08:41:01 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.010 08:41:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:26.010 ************************************ 00:14:26.010 START TEST bdev_verify_big_io 00:14:26.010 ************************************ 00:14:26.010 08:41:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:26.268 [2024-07-12 08:41:01.228324] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:14:26.268 [2024-07-12 08:41:01.228781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119031 ] 00:14:26.268 [2024-07-12 08:41:01.404874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.526 [2024-07-12 08:41:01.622914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.526 [2024-07-12 08:41:01.622924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.095 [2024-07-12 08:41:02.006113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:27.095 [2024-07-12 08:41:02.006198] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:27.095 [2024-07-12 08:41:02.014064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:27.095 [2024-07-12 08:41:02.014140] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:27.095 [2024-07-12 08:41:02.022087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:27.095 [2024-07-12 08:41:02.022199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:27.095 [2024-07-12 08:41:02.022225] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:27.096 [2024-07-12 08:41:02.250179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:27.096 [2024-07-12 08:41:02.250306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.096 [2024-07-12 08:41:02.250391] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:27.096 [2024-07-12 08:41:02.250430] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.096 [2024-07-12 08:41:02.253760] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.096 [2024-07-12 08:41:02.253832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:27.661 [2024-07-12 08:41:02.639637] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.643028] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.646929] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.650716] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.654330] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.658122] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.661600] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.665768] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.669096] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.672954] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.676277] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.680228] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.683515] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.687329] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.691185] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.694456] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:27.661 [2024-07-12 08:41:02.777396] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:27.661 [2024-07-12 08:41:02.784057] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:27.661 Running I/O for 5 seconds... 00:14:34.240 00:14:34.240 Latency(us) 00:14:34.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.240 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x100 00:14:34.240 Malloc0 : 5.73 223.53 13.97 0.00 0.00 564252.34 882.50 1731103.65 00:14:34.240 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x100 length 0x100 00:14:34.240 Malloc0 : 5.79 221.09 13.82 0.00 0.00 569492.03 830.37 1753981.67 00:14:34.240 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x80 00:14:34.240 Malloc1p0 : 5.94 122.65 7.67 0.00 0.00 982637.35 2785.28 2043769.95 00:14:34.240 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x80 length 0x80 00:14:34.240 Malloc1p0 : 6.46 44.58 2.79 0.00 0.00 2644517.92 1601.16 4331572.13 00:14:34.240 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x80 00:14:34.240 Malloc1p1 : 6.19 46.53 2.91 0.00 0.00 2505918.77 1623.51 4179051.99 00:14:34.240 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x80 length 0x80 00:14:34.240 Malloc1p1 : 6.46 44.57 2.79 0.00 0.00 2576031.72 1526.69 4179051.99 00:14:34.240 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p0 : 5.87 32.69 2.04 0.00 0.00 890777.53 726.11 1509949.44 00:14:34.240 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p0 : 5.93 32.39 2.02 0.00 0.00 893660.36 662.81 1471819.40 00:14:34.240 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p1 : 5.87 32.68 2.04 0.00 0.00 885226.93 670.25 1494697.43 00:14:34.240 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p1 : 5.93 32.38 2.02 0.00 0.00 886983.22 670.25 1448941.38 00:14:34.240 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p2 : 5.88 32.67 2.04 0.00 0.00 879526.58 655.36 1479445.41 00:14:34.240 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p2 : 5.93 32.37 2.02 0.00 0.00 880763.47 677.70 1433689.37 00:14:34.240 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p3 : 5.88 32.67 2.04 0.00 0.00 874140.96 722.39 1456567.39 00:14:34.240 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p3 : 5.93 32.36 2.02 0.00 0.00 874283.98 714.94 1410811.35 00:14:34.240 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p4 : 5.94 35.03 2.19 0.00 0.00 816975.17 688.87 1441315.37 00:14:34.240 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p4 : 5.93 32.35 2.02 0.00 0.00 867547.10 659.08 1387933.32 00:14:34.240 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p5 : 5.94 35.02 2.19 0.00 0.00 811923.75 685.15 1418437.35 00:14:34.240 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p5 : 6.05 34.38 2.15 0.00 0.00 814958.64 711.21 1372681.31 00:14:34.240 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p6 : 5.94 35.01 2.19 0.00 0.00 806761.13 685.15 1403185.34 00:14:34.240 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p6 : 6.05 34.37 2.15 0.00 0.00 809189.89 673.98 1349803.29 00:14:34.240 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x20 00:14:34.240 Malloc2p7 : 5.94 35.00 2.19 0.00 0.00 801437.48 711.21 1380307.32 00:14:34.240 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x20 length 0x20 00:14:34.240 Malloc2p7 : 6.05 34.36 2.15 0.00 0.00 803009.49 659.08 1326925.27 00:14:34.240 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x100 00:14:34.240 TestPT : 6.29 46.10 2.88 0.00 0.00 2331890.04 81979.58 3538467.37 00:14:34.240 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x100 length 0x100 00:14:34.240 TestPT : 6.42 44.86 2.80 0.00 0.00 2357629.47 71017.19 3599475.43 00:14:34.240 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.240 Verification LBA range: start 0x0 length 0x200 00:14:34.240 raid0 : 6.39 50.05 3.13 0.00 0.00 2076409.67 2025.66 3751995.58 00:14:34.241 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x200 length 0x200 00:14:34.241 raid0 : 6.42 57.15 3.57 0.00 0.00 1843208.51 1653.29 3751995.58 00:14:34.241 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x0 length 0x200 00:14:34.241 concat0 : 6.40 57.54 3.60 0.00 0.00 1769324.51 2010.76 3584223.42 00:14:34.241 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x200 length 0x200 00:14:34.241 concat0 : 6.43 68.48 4.28 0.00 0.00 1509111.53 
1660.74 3614727.45 00:14:34.241 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x0 length 0x100 00:14:34.241 raid1 : 6.43 84.59 5.29 0.00 0.00 1203749.87 2129.92 3446955.29 00:14:34.241 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x100 length 0x100 00:14:34.241 raid1 : 6.46 64.44 4.03 0.00 0.00 1568902.62 2115.03 3477459.32 00:14:34.241 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x0 length 0x4e 00:14:34.241 AIO0 : 6.44 76.14 4.76 0.00 0.00 798804.47 1266.04 2043769.95 00:14:34.241 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:14:34.241 Verification LBA range: start 0x4e length 0x4e 00:14:34.241 AIO0 : 6.47 61.25 3.83 0.00 0.00 983344.65 1772.45 2059021.96 00:14:34.241 =================================================================================================================== 00:14:34.241 Total : 1849.29 115.58 0.00 0.00 1172159.59 655.36 4331572.13 00:14:36.770 00:14:36.770 real 0m10.744s 00:14:36.770 user 0m19.899s 00:14:36.770 sys 0m0.500s 00:14:36.770 ************************************ 00:14:36.770 END TEST bdev_verify_big_io 00:14:36.770 ************************************ 00:14:36.770 08:41:11 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.770 08:41:11 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.770 08:41:11 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:36.770 08:41:11 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:36.770 08:41:11 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:36.770 08:41:11 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.770 08:41:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:36.770 ************************************ 00:14:36.770 START TEST bdev_write_zeroes 00:14:36.770 ************************************ 00:14:36.770 08:41:11 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:37.029 [2024-07-12 08:41:12.022931] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:14:37.029 [2024-07-12 08:41:12.023170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119195 ] 00:14:37.029 [2024-07-12 08:41:12.183732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.288 [2024-07-12 08:41:12.456407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.915 [2024-07-12 08:41:12.839190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:37.915 [2024-07-12 08:41:12.839305] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:37.915 [2024-07-12 08:41:12.847158] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:37.915 [2024-07-12 08:41:12.847230] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:37.915 [2024-07-12 08:41:12.855174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:37.915 [2024-07-12 08:41:12.855255] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:37.915 [2024-07-12 08:41:12.855309] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:37.915 [2024-07-12 08:41:13.051836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:37.915 [2024-07-12 08:41:13.051986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.915 [2024-07-12 08:41:13.052020] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:37.915 [2024-07-12 08:41:13.052053] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.915 [2024-07-12 08:41:13.054651] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.915 [2024-07-12 08:41:13.054718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:38.481 Running I/O for 1 seconds... 
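The job above drives a write_zeroes workload (queue depth 128, 4096-byte I/O, 1 second) through bdevperf against the bdev tree declared in bdev.json; the NOTICE lines show that tree being assembled, with the passthru vbdev deferred until its base bdev arrives and then registered as TestPT. A minimal standalone reproduction, assuming only a single malloc bdev (the real bdev.json also builds the Malloc*/passthru/TestPT chain exercised here):

    # Sketch: a one-bdev JSON config; the real test config is richer
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    # Same flags as the run above, pointed at the minimal config
    ./build/examples/bdevperf --json /tmp/minimal_bdev.json -q 128 -o 4096 -w write_zeroes -t 1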
00:14:39.416 00:14:39.416 Latency(us) 00:14:39.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.416 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc0 : 1.04 4677.04 18.27 0.00 0.00 27343.51 975.59 45517.73 00:14:39.416 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc1p0 : 1.04 4670.48 18.24 0.00 0.00 27318.36 1318.17 44087.85 00:14:39.416 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc1p1 : 1.04 4663.99 18.22 0.00 0.00 27277.95 1035.17 42896.29 00:14:39.416 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p0 : 1.04 4657.55 18.19 0.00 0.00 27258.28 983.04 41943.04 00:14:39.416 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p1 : 1.05 4651.07 18.17 0.00 0.00 27234.89 975.59 40989.79 00:14:39.416 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p2 : 1.05 4644.58 18.14 0.00 0.00 27213.63 975.59 40036.54 00:14:39.416 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p3 : 1.05 4638.25 18.12 0.00 0.00 27188.27 990.49 39321.60 00:14:39.416 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p4 : 1.05 4631.94 18.09 0.00 0.00 27175.09 983.04 38368.35 00:14:39.416 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p5 : 1.05 4625.48 18.07 0.00 0.00 27148.26 1035.17 37415.10 00:14:39.416 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p6 : 1.05 4619.19 18.04 0.00 0.00 27123.02 968.15 36461.85 00:14:39.416 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 Malloc2p7 : 1.05 4612.89 18.02 0.00 0.00 27101.27 997.93 35508.60 00:14:39.416 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 TestPT : 1.06 4606.41 17.99 0.00 0.00 27074.20 983.04 34555.35 00:14:39.416 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 raid0 : 1.06 4598.98 17.96 0.00 0.00 27045.09 1750.11 32887.16 00:14:39.416 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 concat0 : 1.07 4674.56 18.26 0.00 0.00 26513.57 1750.11 31218.97 00:14:39.416 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 raid1 : 1.07 4665.90 18.23 0.00 0.00 26437.36 2770.39 29193.31 00:14:39.416 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:39.416 AIO0 : 1.07 4657.07 18.19 0.00 0.00 26346.44 1608.61 29193.31 00:14:39.416 =================================================================================================================== 00:14:39.416 Total : 74295.38 290.22 0.00 0.00 27046.95 968.15 45517.73 00:14:41.946 00:14:41.946 real 0m4.806s 00:14:41.946 user 0m4.232s 00:14:41.946 sys 0m0.397s 00:14:41.946 ************************************ 00:14:41.946 END TEST bdev_write_zeroes 00:14:41.946 ************************************ 00:14:41.946 08:41:16 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.946 08:41:16 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:41.946 08:41:16 
blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:41.946 08:41:16 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:41.946 08:41:16 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:41.946 08:41:16 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.946 08:41:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:41.946 ************************************ 00:14:41.946 START TEST bdev_json_nonenclosed 00:14:41.946 ************************************ 00:14:41.946 08:41:16 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:41.946 [2024-07-12 08:41:16.868112] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:14:41.946 [2024-07-12 08:41:16.868412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119276 ] 00:14:41.946 [2024-07-12 08:41:17.044238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.204 [2024-07-12 08:41:17.296119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.204 [2024-07-12 08:41:17.296303] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:42.204 [2024-07-12 08:41:17.296372] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:42.204 [2024-07-12 08:41:17.296404] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:42.770 00:14:42.770 real 0m0.962s 00:14:42.770 user 0m0.717s 00:14:42.770 sys 0m0.145s 00:14:42.770 ************************************ 00:14:42.770 END TEST bdev_json_nonenclosed 00:14:42.770 ************************************ 00:14:42.770 08:41:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:14:42.770 08:41:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.770 08:41:17 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:42.770 08:41:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:14:42.770 08:41:17 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:14:42.770 08:41:17 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:42.770 08:41:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:42.770 08:41:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.770 08:41:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:42.770 ************************************ 00:14:42.770 START TEST bdev_json_nonarray 00:14:42.770 ************************************ 00:14:42.770 08:41:17 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:42.770 [2024-07-12 08:41:17.878373] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:14:42.770 [2024-07-12 08:41:17.878994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119334 ] 00:14:43.028 [2024-07-12 08:41:18.051832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.286 [2024-07-12 08:41:18.302244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.286 [2024-07-12 08:41:18.302406] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:43.286 [2024-07-12 08:41:18.302480] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:43.286 [2024-07-12 08:41:18.302514] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:43.876 00:14:43.876 real 0m0.963s 00:14:43.876 user 0m0.702s 00:14:43.876 sys 0m0.160s 00:14:43.876 ************************************ 00:14:43.876 08:41:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:14:43.876 08:41:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.876 08:41:18 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:43.876 END TEST bdev_json_nonarray 00:14:43.876 ************************************ 00:14:43.876 08:41:18 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:14:43.876 08:41:18 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:14:43.876 08:41:18 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:14:43.876 08:41:18 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:14:43.876 08:41:18 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.876 08:41:18 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.876 08:41:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:43.876 ************************************ 00:14:43.876 START TEST bdev_qos 00:14:43.876 ************************************ 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=119372 00:14:43.876 Process qos testing pid: 119372 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 119372' 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 119372 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 119372 ']' 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
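The two short tests that just finished, bdev_json_nonenclosed and bdev_json_nonarray, are negative tests: each hands bdevperf a deliberately malformed --json config and expects startup to fail (both runs exit with es=234, which blockdev.sh folds back into success via the `true` steps above). The first rejection fires in json_config_prepare_ctx when the document is not enclosed in {}; the second when 'subsystems' is present but not an array. Hedged reconstructions of two configs that would trigger those paths (the file names match the test, but the exact shipped contents are an assumption inferred from the error messages):

    echo '[]' > nonenclosed.json                 # valid JSON, but the top level is not an object
    echo '{ "subsystems": {} }' > nonarray.json  # an object, but 'subsystems' is not an array
    ./build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
        || echo "rejected as expected"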
00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:43.876 08:41:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:43.876 [2024-07-12 08:41:18.882692] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:14:43.876 [2024-07-12 08:41:18.883273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119372 ] 00:14:44.135 [2024-07-12 08:41:19.069630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.394 [2024-07-12 08:41:19.373424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.961 08:41:19 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.961 08:41:19 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:14:44.961 08:41:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:44.961 08:41:19 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.961 08:41:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.961 Malloc_0 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.961 [ 00:14:44.961 { 00:14:44.961 "name": "Malloc_0", 00:14:44.961 "aliases": [ 00:14:44.961 "404637c9-252f-4311-a5dc-4e48d6398f8d" 00:14:44.961 ], 00:14:44.961 "product_name": "Malloc disk", 00:14:44.961 "block_size": 512, 00:14:44.961 "num_blocks": 262144, 00:14:44.961 "uuid": "404637c9-252f-4311-a5dc-4e48d6398f8d", 00:14:44.961 "assigned_rate_limits": { 00:14:44.961 
"rw_ios_per_sec": 0, 00:14:44.961 "rw_mbytes_per_sec": 0, 00:14:44.961 "r_mbytes_per_sec": 0, 00:14:44.961 "w_mbytes_per_sec": 0 00:14:44.961 }, 00:14:44.961 "claimed": false, 00:14:44.961 "zoned": false, 00:14:44.961 "supported_io_types": { 00:14:44.961 "read": true, 00:14:44.961 "write": true, 00:14:44.961 "unmap": true, 00:14:44.961 "flush": true, 00:14:44.961 "reset": true, 00:14:44.961 "nvme_admin": false, 00:14:44.961 "nvme_io": false, 00:14:44.961 "nvme_io_md": false, 00:14:44.961 "write_zeroes": true, 00:14:44.961 "zcopy": true, 00:14:44.961 "get_zone_info": false, 00:14:44.961 "zone_management": false, 00:14:44.961 "zone_append": false, 00:14:44.961 "compare": false, 00:14:44.961 "compare_and_write": false, 00:14:44.961 "abort": true, 00:14:44.961 "seek_hole": false, 00:14:44.961 "seek_data": false, 00:14:44.961 "copy": true, 00:14:44.961 "nvme_iov_md": false 00:14:44.961 }, 00:14:44.961 "memory_domains": [ 00:14:44.961 { 00:14:44.961 "dma_device_id": "system", 00:14:44.961 "dma_device_type": 1 00:14:44.961 }, 00:14:44.961 { 00:14:44.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.961 "dma_device_type": 2 00:14:44.961 } 00:14:44.961 ], 00:14:44.961 "driver_specific": {} 00:14:44.961 } 00:14:44.961 ] 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:14:44.961 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.962 Null_1 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.962 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:44.962 [ 00:14:44.962 { 00:14:44.962 "name": "Null_1", 00:14:44.962 "aliases": [ 00:14:44.962 "f2498ecd-1399-4a46-bdb7-ae2133e58a22" 00:14:44.962 ], 00:14:44.962 "product_name": "Null disk", 00:14:44.962 "block_size": 512, 00:14:44.962 "num_blocks": 262144, 00:14:44.962 "uuid": "f2498ecd-1399-4a46-bdb7-ae2133e58a22", 00:14:44.962 "assigned_rate_limits": 
{ 00:14:44.962 "rw_ios_per_sec": 0, 00:14:44.962 "rw_mbytes_per_sec": 0, 00:14:44.962 "r_mbytes_per_sec": 0, 00:14:44.962 "w_mbytes_per_sec": 0 00:14:44.962 }, 00:14:44.962 "claimed": false, 00:14:44.962 "zoned": false, 00:14:44.962 "supported_io_types": { 00:14:44.962 "read": true, 00:14:44.962 "write": true, 00:14:44.962 "unmap": false, 00:14:44.962 "flush": false, 00:14:44.962 "reset": true, 00:14:44.962 "nvme_admin": false, 00:14:44.962 "nvme_io": false, 00:14:44.962 "nvme_io_md": false, 00:14:44.962 "write_zeroes": true, 00:14:44.962 "zcopy": false, 00:14:44.962 "get_zone_info": false, 00:14:44.962 "zone_management": false, 00:14:44.962 "zone_append": false, 00:14:44.962 "compare": false, 00:14:44.962 "compare_and_write": false, 00:14:44.962 "abort": true, 00:14:44.962 "seek_hole": false, 00:14:45.220 "seek_data": false, 00:14:45.220 "copy": false, 00:14:45.220 "nvme_iov_md": false 00:14:45.220 }, 00:14:45.220 "driver_specific": {} 00:14:45.220 } 00:14:45.220 ] 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:45.220 08:41:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:14:45.220 Running I/O for 60 seconds... 
00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 55169.52 220678.10 0.00 0.00 223232.00 0.00 0.00 ' 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=55169.52 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 55169 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=55169 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=13000 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 13000 -gt 1000 ']' 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 13000 Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 13000 IOPS Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.483 08:41:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:50.483 ************************************ 00:14:50.483 START TEST bdev_qos_iops 00:14:50.483 ************************************ 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 13000 IOPS Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=13000 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:50.483 08:41:25 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 12993.48 51973.93 0.00 0.00 52884.00 0.00 0.00 ' 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=12993.48 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@385 -- # echo 12993 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=12993 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=11700 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=14300 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 12993 -lt 11700 ']' 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 12993 -gt 14300 ']' 00:14:55.745 00:14:55.745 real 0m5.222s 00:14:55.745 user 0m0.135s 00:14:55.745 sys 0m0.026s 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.745 08:41:30 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:14:55.745 ************************************ 00:14:55.745 END TEST bdev_qos_iops 00:14:55.745 ************************************ 00:14:55.745 08:41:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:55.745 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:14:55.745 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:55.745 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:55.746 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:55.746 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:55.746 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:55.746 08:41:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 30005.92 120023.70 0.00 0.00 121856.00 0.00 0.00 ' 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=121856.00 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 121856 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=121856 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=11 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 11 -lt 2 ']' 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
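bdev_qos_iops passes because the throttled measurement (12993 IOPS) lands inside run_qos_test's acceptance window of plus or minus 10% of the configured limit, printed above as lower_limit=11700 and upper_limit=14300. The bandwidth stage that follows repeats the pattern against Null_1, reading iostat's MB_read/s column (awk '{print $6}') instead of the tps column and converting the megabyte limit with a factor of 1024 before comparing (11 MB/s becomes qos_limit=11264, bracketed by 10137 and 12390 below). The check itself, reconstructed from those printed bounds:

    # run_qos_test acceptance: result within +/-10% of the configured limit
    qos_limit=13000
    qos_result=12993
    lower_limit=$((qos_limit * 9 / 10))    # 11700
    upper_limit=$((qos_limit * 11 / 10))   # 14300
    [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ] \
        && echo "within tolerance"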
00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.005 08:41:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:01.005 ************************************ 00:15:01.005 START TEST bdev_qos_bw 00:15:01.005 ************************************ 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 11 BANDWIDTH Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=11 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:01.005 08:41:35 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2831.92 11327.66 0.00 0.00 11592.00 0.00 0.00 ' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=11592.00 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 11592 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=11592 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=11264 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=10137 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=12390 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 11592 -lt 10137 ']' 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 11592 -gt 12390 ']' 00:15:06.295 00:15:06.295 real 0m5.266s 00:15:06.295 user 0m0.137s 00:15:06.295 sys 0m0.027s 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.295 ************************************ 00:15:06.295 END TEST bdev_qos_bw 00:15:06.295 ************************************ 00:15:06.295 08:41:41 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:15:06.295 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:15:06.295 08:41:41 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:06.295 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.296 08:41:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:06.296 ************************************ 00:15:06.296 START TEST bdev_qos_ro_bw 00:15:06.296 ************************************ 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:06.296 08:41:41 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.38 2045.54 0.00 0.00 2060.00 0.00 0.00 ' 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2060 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2060 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']' 00:15:11.556 08:41:46 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']' 00:15:11.556 00:15:11.556 real 0m5.142s 00:15:11.556 user 0m0.097s 00:15:11.556 sys 0m0.017s 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.556 ************************************ 00:15:11.556 END TEST bdev_qos_ro_bw 00:15:11.556 ************************************ 00:15:11.556 08:41:46 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:15:11.556 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:15:11.556 08:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:11.556 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.556 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:11.813 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.813 08:41:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:15:11.813 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.813 08:41:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:12.071 00:15:12.071 Latency(us) 00:15:12.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.071 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:12.071 Malloc_0 : 26.64 18787.92 73.39 0.00 0.00 13499.73 2457.60 503316.48 00:15:12.071 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:12.071 Null_1 : 26.84 23174.76 90.53 0.00 0.00 11023.86 897.40 202089.19 00:15:12.071 =================================================================================================================== 00:15:12.071 Total : 41962.68 163.92 0.00 0.00 12127.70 897.40 503316.48 00:15:12.071 0 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 119372 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 119372 ']' 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 119372 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119372 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:12.072 killing process with pid 119372 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119372' 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 119372 00:15:12.072 Received shutdown signal, test time was about 26.870240 seconds 00:15:12.072 00:15:12.072 Latency(us) 00:15:12.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.072 
=================================================================================================================== 00:15:12.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.072 08:41:47 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 119372 00:15:13.455 08:41:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:15:13.455 00:15:13.455 real 0m29.695s 00:15:13.455 user 0m30.430s 00:15:13.455 sys 0m0.700s 00:15:13.455 08:41:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.455 08:41:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:15:13.455 ************************************ 00:15:13.455 END TEST bdev_qos 00:15:13.455 ************************************ 00:15:13.455 08:41:48 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:13.455 08:41:48 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:13.455 08:41:48 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.455 08:41:48 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.455 08:41:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:13.455 ************************************ 00:15:13.455 START TEST bdev_qd_sampling 00:15:13.455 ************************************ 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=119897 00:15:13.455 Process bdev QD sampling period testing pid: 119897 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 119897' 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 119897 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 119897 ']' 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.455 08:41:48 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:13.455 [2024-07-12 08:41:48.600539] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
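The qd-sampling suite starting here checks SPDK's per-bdev queue-depth instrumentation: it enables polling with bdev_set_qd_sampling_period Malloc_QD 10, lets a -q 256 randread job run on two reactors (-m 0x3), and reads the counters back with bdev_get_iostat. In the run below, io_time is 20 and weighted_io_time is 10240, so the implied average queue depth is 512, matching the reported queue_depth and the two cores each keeping 256 I/Os outstanding. A sketch of pulling those fields out, using the same jq path the test uses (the averaging arithmetic is an observation from this run, not a documented API guarantee):

    iostats=$(scripts/rpc.py bdev_get_iostat -b Malloc_QD)
    period=$(jq -r '.bdevs[0].queue_depth_polling_period' <<< "$iostats")   # 10
    io_time=$(jq -r '.bdevs[0].io_time' <<< "$iostats")                     # 20
    weighted=$(jq -r '.bdevs[0].weighted_io_time' <<< "$iostats")           # 10240
    echo "avg queue depth: $((weighted / io_time))"                         # 512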
00:15:13.455 [2024-07-12 08:41:48.600731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119897 ] 00:15:13.712 [2024-07-12 08:41:48.768013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:13.970 [2024-07-12 08:41:48.986997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.970 [2024-07-12 08:41:48.986993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.536 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.536 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:15:14.536 08:41:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:14.536 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.536 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:14.793 Malloc_QD 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:14.794 [ 00:15:14.794 { 00:15:14.794 "name": "Malloc_QD", 00:15:14.794 "aliases": [ 00:15:14.794 "ecd62d61-0cf7-4098-882c-f4acd3975e26" 00:15:14.794 ], 00:15:14.794 "product_name": "Malloc disk", 00:15:14.794 "block_size": 512, 00:15:14.794 "num_blocks": 262144, 00:15:14.794 "uuid": "ecd62d61-0cf7-4098-882c-f4acd3975e26", 00:15:14.794 "assigned_rate_limits": { 00:15:14.794 "rw_ios_per_sec": 0, 00:15:14.794 "rw_mbytes_per_sec": 0, 00:15:14.794 "r_mbytes_per_sec": 0, 00:15:14.794 "w_mbytes_per_sec": 0 00:15:14.794 }, 00:15:14.794 "claimed": false, 00:15:14.794 "zoned": false, 00:15:14.794 "supported_io_types": { 00:15:14.794 "read": true, 00:15:14.794 "write": true, 00:15:14.794 "unmap": true, 00:15:14.794 "flush": true, 00:15:14.794 "reset": true, 00:15:14.794 "nvme_admin": 
false, 00:15:14.794 "nvme_io": false, 00:15:14.794 "nvme_io_md": false, 00:15:14.794 "write_zeroes": true, 00:15:14.794 "zcopy": true, 00:15:14.794 "get_zone_info": false, 00:15:14.794 "zone_management": false, 00:15:14.794 "zone_append": false, 00:15:14.794 "compare": false, 00:15:14.794 "compare_and_write": false, 00:15:14.794 "abort": true, 00:15:14.794 "seek_hole": false, 00:15:14.794 "seek_data": false, 00:15:14.794 "copy": true, 00:15:14.794 "nvme_iov_md": false 00:15:14.794 }, 00:15:14.794 "memory_domains": [ 00:15:14.794 { 00:15:14.794 "dma_device_id": "system", 00:15:14.794 "dma_device_type": 1 00:15:14.794 }, 00:15:14.794 { 00:15:14.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.794 "dma_device_type": 2 00:15:14.794 } 00:15:14.794 ], 00:15:14.794 "driver_specific": {} 00:15:14.794 } 00:15:14.794 ] 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:15:14.794 08:41:49 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:14.794 Running I/O for 5 seconds... 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.693 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:15:16.693 "tick_rate": 2200000000, 00:15:16.693 "ticks": 1850896113633, 00:15:16.693 "bdevs": [ 00:15:16.693 { 00:15:16.693 "name": "Malloc_QD", 00:15:16.693 "bytes_read": 802198016, 00:15:16.693 "num_read_ops": 195843, 00:15:16.693 "bytes_written": 0, 00:15:16.693 "num_write_ops": 0, 00:15:16.694 "bytes_unmapped": 0, 00:15:16.694 "num_unmap_ops": 0, 00:15:16.694 "bytes_copied": 0, 00:15:16.694 "num_copy_ops": 0, 00:15:16.694 "read_latency_ticks": 2158320507770, 00:15:16.694 "max_read_latency_ticks": 16835361, 00:15:16.694 "min_read_latency_ticks": 346795, 00:15:16.694 "write_latency_ticks": 0, 00:15:16.694 "max_write_latency_ticks": 0, 00:15:16.694 "min_write_latency_ticks": 0, 00:15:16.694 "unmap_latency_ticks": 0, 00:15:16.694 "max_unmap_latency_ticks": 0, 00:15:16.694 
"min_unmap_latency_ticks": 0, 00:15:16.694 "copy_latency_ticks": 0, 00:15:16.694 "max_copy_latency_ticks": 0, 00:15:16.694 "min_copy_latency_ticks": 0, 00:15:16.694 "io_error": {}, 00:15:16.694 "queue_depth_polling_period": 10, 00:15:16.694 "queue_depth": 512, 00:15:16.694 "io_time": 20, 00:15:16.694 "weighted_io_time": 10240 00:15:16.694 } 00:15:16.694 ] 00:15:16.694 }' 00:15:16.694 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.953 08:41:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:16.953 00:15:16.953 Latency(us) 00:15:16.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.953 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:16.954 Malloc_QD : 2.01 51059.20 199.45 0.00 0.00 5000.63 1273.48 7685.59 00:15:16.954 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:16.954 Malloc_QD : 2.01 50856.50 198.66 0.00 0.00 5020.97 912.29 7149.38 00:15:16.954 =================================================================================================================== 00:15:16.954 Total : 101915.70 398.11 0.00 0.00 5010.79 912.29 7685.59 00:15:16.954 0 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 119897 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 119897 ']' 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 119897 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119897 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.954 killing process with pid 119897 00:15:16.954 Received shutdown signal, test time was about 2.142747 seconds 00:15:16.954 00:15:16.954 Latency(us) 00:15:16.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.954 =================================================================================================================== 00:15:16.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119897' 00:15:16.954 08:41:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 119897 00:15:16.954 08:41:52 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 119897 00:15:18.366 08:41:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:15:18.366 ************************************ 00:15:18.366 END TEST bdev_qd_sampling 00:15:18.366 00:15:18.366 real 0m4.849s 00:15:18.366 user 0m9.155s 00:15:18.366 sys 0m0.340s 00:15:18.366 08:41:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.366 08:41:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:15:18.366 ************************************ 00:15:18.366 08:41:53 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:18.366 08:41:53 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:15:18.366 08:41:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.366 08:41:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.366 08:41:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:18.366 ************************************ 00:15:18.366 START TEST bdev_error 00:15:18.366 ************************************ 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=119991 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 119991' 00:15:18.366 Process error testing pid: 119991 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 119991 00:15:18.366 08:41:53 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 119991 ']' 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.366 08:41:53 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:18.366 [2024-07-12 08:41:53.490303] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
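bdev_error builds its fixture by layering an error-injecting vbdev over a plain malloc disk: the bdev_error_create Dev_1 call coming up exposes the EE_Dev_1 device named in ERR_DEV, and later stages flip it between working and failing while bdevperf I/O is in flight. A sketch of that construction (the first two RPCs appear verbatim below; the injection line is illustrative, and its exact flags should be checked against scripts/rpc.py bdev_error_inject_error --help):

    scripts/rpc.py bdev_malloc_create -b Dev_1 128 512   # 128 MiB disk, 512-byte blocks
    scripts/rpc.py bdev_error_create Dev_1               # exposes EE_Dev_1
    scripts/rpc.py bdev_error_inject_error EE_Dev_1 read failure -n 5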
00:15:18.366 [2024-07-12 08:41:53.490489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119991 ] 00:15:18.623 [2024-07-12 08:41:53.649006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.880 [2024-07-12 08:41:53.898464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.446 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.446 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:15:19.446 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:19.446 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.446 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 Dev_1 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 [ 00:15:19.705 { 00:15:19.705 "name": "Dev_1", 00:15:19.705 "aliases": [ 00:15:19.705 "9877eeef-502b-4bdc-8e1a-308bb56b0a14" 00:15:19.705 ], 00:15:19.705 "product_name": "Malloc disk", 00:15:19.705 "block_size": 512, 00:15:19.705 "num_blocks": 262144, 00:15:19.705 "uuid": "9877eeef-502b-4bdc-8e1a-308bb56b0a14", 00:15:19.705 "assigned_rate_limits": { 00:15:19.705 "rw_ios_per_sec": 0, 00:15:19.705 "rw_mbytes_per_sec": 0, 00:15:19.705 "r_mbytes_per_sec": 0, 00:15:19.705 "w_mbytes_per_sec": 0 00:15:19.705 }, 00:15:19.705 "claimed": false, 00:15:19.705 "zoned": false, 00:15:19.705 "supported_io_types": { 00:15:19.705 "read": true, 00:15:19.705 "write": true, 00:15:19.705 "unmap": true, 00:15:19.705 "flush": true, 00:15:19.705 "reset": true, 00:15:19.705 "nvme_admin": false, 00:15:19.705 "nvme_io": false, 00:15:19.705 "nvme_io_md": false, 00:15:19.705 "write_zeroes": true, 00:15:19.705 "zcopy": true, 00:15:19.705 "get_zone_info": false, 00:15:19.705 "zone_management": false, 00:15:19.705 "zone_append": false, 
00:15:19.705 "compare": false, 00:15:19.705 "compare_and_write": false, 00:15:19.705 "abort": true, 00:15:19.705 "seek_hole": false, 00:15:19.705 "seek_data": false, 00:15:19.705 "copy": true, 00:15:19.705 "nvme_iov_md": false 00:15:19.705 }, 00:15:19.705 "memory_domains": [ 00:15:19.705 { 00:15:19.705 "dma_device_id": "system", 00:15:19.705 "dma_device_type": 1 00:15:19.705 }, 00:15:19.705 { 00:15:19.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.705 "dma_device_type": 2 00:15:19.705 } 00:15:19.705 ], 00:15:19.705 "driver_specific": {} 00:15:19.705 } 00:15:19.705 ] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 true 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 Dev_2 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 [ 00:15:19.705 { 00:15:19.705 "name": "Dev_2", 00:15:19.705 "aliases": [ 00:15:19.705 "f26553e0-3aa1-406d-bad5-b635ad32f9f6" 00:15:19.705 ], 00:15:19.705 "product_name": "Malloc disk", 00:15:19.705 "block_size": 512, 00:15:19.705 "num_blocks": 262144, 00:15:19.705 "uuid": "f26553e0-3aa1-406d-bad5-b635ad32f9f6", 00:15:19.705 "assigned_rate_limits": { 00:15:19.705 "rw_ios_per_sec": 0, 00:15:19.705 "rw_mbytes_per_sec": 0, 00:15:19.705 "r_mbytes_per_sec": 0, 00:15:19.705 "w_mbytes_per_sec": 0 00:15:19.705 }, 00:15:19.705 "claimed": 
false, 00:15:19.705 "zoned": false, 00:15:19.705 "supported_io_types": { 00:15:19.705 "read": true, 00:15:19.705 "write": true, 00:15:19.705 "unmap": true, 00:15:19.705 "flush": true, 00:15:19.705 "reset": true, 00:15:19.705 "nvme_admin": false, 00:15:19.705 "nvme_io": false, 00:15:19.705 "nvme_io_md": false, 00:15:19.705 "write_zeroes": true, 00:15:19.705 "zcopy": true, 00:15:19.705 "get_zone_info": false, 00:15:19.705 "zone_management": false, 00:15:19.705 "zone_append": false, 00:15:19.705 "compare": false, 00:15:19.705 "compare_and_write": false, 00:15:19.705 "abort": true, 00:15:19.705 "seek_hole": false, 00:15:19.705 "seek_data": false, 00:15:19.705 "copy": true, 00:15:19.705 "nvme_iov_md": false 00:15:19.705 }, 00:15:19.705 "memory_domains": [ 00:15:19.705 { 00:15:19.705 "dma_device_id": "system", 00:15:19.705 "dma_device_type": 1 00:15:19.705 }, 00:15:19.705 { 00:15:19.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.705 "dma_device_type": 2 00:15:19.705 } 00:15:19.705 ], 00:15:19.705 "driver_specific": {} 00:15:19.705 } 00:15:19.705 ] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 08:41:54 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:15:19.705 08:41:54 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:19.964 Running I/O for 5 seconds... 00:15:20.899 08:41:55 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 119991 00:15:20.899 08:41:55 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 119991' 00:15:20.899 Process is existed as continue on error is set. 
Pid: 119991 00:15:20.899 08:41:55 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:20.899 08:41:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.899 08:41:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:20.899 08:41:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.899 08:41:55 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:20.899 08:41:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.899 08:41:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:20.899 Timeout while waiting for response: 00:15:20.899 00:15:20.899 00:15:21.157 08:41:56 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.157 08:41:56 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:15:25.409 00:15:25.409 Latency(us) 00:15:25.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.409 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:25.409 EE_Dev_1 : 0.92 35129.80 137.23 5.42 0.00 451.94 167.56 808.03 00:15:25.409 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:25.409 Dev_2 : 5.00 70119.39 273.90 0.00 0.00 224.77 67.96 316479.30 00:15:25.409 =================================================================================================================== 00:15:25.409 Total : 105249.20 411.13 5.42 0.00 243.98 67.96 316479.30 00:15:26.339 08:42:01 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 119991 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 119991 ']' 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 119991 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119991 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:26.339 killing process with pid 119991 00:15:26.339 Received shutdown signal, test time was about 5.000000 seconds 00:15:26.339 00:15:26.339 Latency(us) 00:15:26.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.339 =================================================================================================================== 00:15:26.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119991' 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 119991 00:15:26.339 08:42:01 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 119991 00:15:27.713 Process error testing pid: 120128 00:15:27.713 08:42:02 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=120128 00:15:27.713 08:42:02 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 120128' 00:15:27.713 08:42:02 
blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 120128 00:15:27.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.713 08:42:02 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 120128 ']' 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.713 08:42:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:27.713 [2024-07-12 08:42:02.717948] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:15:27.713 [2024-07-12 08:42:02.718192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120128 ] 00:15:27.713 [2024-07-12 08:42:02.883680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.973 [2024-07-12 08:42:03.131390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:15:28.906 08:42:03 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 Dev_1 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:28.906 08:42:03 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 [ 00:15:28.906 { 00:15:28.906 "name": "Dev_1", 00:15:28.906 "aliases": [ 00:15:28.906 "9d27c88c-771a-40c6-9f66-b8efb3f8097e" 00:15:28.906 ], 00:15:28.906 "product_name": "Malloc disk", 00:15:28.906 "block_size": 512, 00:15:28.906 "num_blocks": 262144, 00:15:28.906 "uuid": "9d27c88c-771a-40c6-9f66-b8efb3f8097e", 00:15:28.906 "assigned_rate_limits": { 00:15:28.906 "rw_ios_per_sec": 0, 00:15:28.906 "rw_mbytes_per_sec": 0, 00:15:28.906 "r_mbytes_per_sec": 0, 00:15:28.906 "w_mbytes_per_sec": 0 00:15:28.906 }, 00:15:28.906 "claimed": false, 00:15:28.906 "zoned": false, 00:15:28.906 "supported_io_types": { 00:15:28.906 "read": true, 00:15:28.906 "write": true, 00:15:28.906 "unmap": true, 00:15:28.906 "flush": true, 00:15:28.906 "reset": true, 00:15:28.906 "nvme_admin": false, 00:15:28.906 "nvme_io": false, 00:15:28.906 "nvme_io_md": false, 00:15:28.906 "write_zeroes": true, 00:15:28.906 "zcopy": true, 00:15:28.906 "get_zone_info": false, 00:15:28.906 "zone_management": false, 00:15:28.906 "zone_append": false, 00:15:28.906 "compare": false, 00:15:28.906 "compare_and_write": false, 00:15:28.906 "abort": true, 00:15:28.906 "seek_hole": false, 00:15:28.906 "seek_data": false, 00:15:28.906 "copy": true, 00:15:28.906 "nvme_iov_md": false 00:15:28.906 }, 00:15:28.906 "memory_domains": [ 00:15:28.906 { 00:15:28.906 "dma_device_id": "system", 00:15:28.906 "dma_device_type": 1 00:15:28.906 }, 00:15:28.906 { 00:15:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.906 "dma_device_type": 2 00:15:28.906 } 00:15:28.906 ], 00:15:28.906 "driver_specific": {} 00:15:28.906 } 00:15:28.906 ] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:28.906 08:42:03 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 true 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:03 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 Dev_2 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:04 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@902 
-- # rpc_cmd bdev_wait_for_examine 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.906 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:28.906 [ 00:15:28.906 { 00:15:28.906 "name": "Dev_2", 00:15:28.906 "aliases": [ 00:15:28.906 "9bca73e5-b963-4656-8ed5-ad777ae86aa6" 00:15:28.906 ], 00:15:28.906 "product_name": "Malloc disk", 00:15:28.906 "block_size": 512, 00:15:28.906 "num_blocks": 262144, 00:15:28.906 "uuid": "9bca73e5-b963-4656-8ed5-ad777ae86aa6", 00:15:28.906 "assigned_rate_limits": { 00:15:28.906 "rw_ios_per_sec": 0, 00:15:28.906 "rw_mbytes_per_sec": 0, 00:15:28.906 "r_mbytes_per_sec": 0, 00:15:28.906 "w_mbytes_per_sec": 0 00:15:28.906 }, 00:15:28.906 "claimed": false, 00:15:28.906 "zoned": false, 00:15:28.906 "supported_io_types": { 00:15:28.906 "read": true, 00:15:28.906 "write": true, 00:15:28.906 "unmap": true, 00:15:28.906 "flush": true, 00:15:28.906 "reset": true, 00:15:28.906 "nvme_admin": false, 00:15:28.907 "nvme_io": false, 00:15:28.907 "nvme_io_md": false, 00:15:28.907 "write_zeroes": true, 00:15:28.907 "zcopy": true, 00:15:28.907 "get_zone_info": false, 00:15:28.907 "zone_management": false, 00:15:28.907 "zone_append": false, 00:15:28.907 "compare": false, 00:15:28.907 "compare_and_write": false, 00:15:28.907 "abort": true, 00:15:28.907 "seek_hole": false, 00:15:28.907 "seek_data": false, 00:15:28.907 "copy": true, 00:15:28.907 "nvme_iov_md": false 00:15:28.907 }, 00:15:28.907 "memory_domains": [ 00:15:28.907 { 00:15:28.907 "dma_device_id": "system", 00:15:28.907 "dma_device_type": 1 00:15:28.907 }, 00:15:28.907 { 00:15:28.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.907 "dma_device_type": 2 00:15:28.907 } 00:15:28.907 ], 00:15:28.907 "driver_specific": {} 00:15:28.907 } 00:15:28.907 ] 00:15:28.907 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.907 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:15:28.907 08:42:04 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:28.907 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.907 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.179 08:42:04 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 120128 00:15:29.179 08:42:04 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 120128 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:15:29.179 08:42:04 blockdev_general.bdev_error -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:29.179 08:42:04 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 120128 00:15:29.179 Running I/O for 5 seconds... 00:15:29.179 task offset: 139616 on job bdev=EE_Dev_1 fails 00:15:29.179 00:15:29.179 Latency(us) 00:15:29.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.179 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:29.179 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:29.179 EE_Dev_1 : 0.00 26538.00 103.66 6031.36 0.00 397.52 158.25 726.11 00:15:29.179 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:29.179 Dev_2 : 0.00 18018.02 70.38 0.00 0.00 625.28 147.08 1146.88 00:15:29.179 =================================================================================================================== 00:15:29.179 Total : 44556.02 174.05 6031.36 0.00 521.05 147.08 1146.88 00:15:29.179 [2024-07-12 08:42:04.230994] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:29.179 request: 00:15:29.179 { 00:15:29.179 "method": "perform_tests", 00:15:29.179 "req_id": 1 00:15:29.179 } 00:15:29.179 Got JSON-RPC error response 00:15:29.179 response: 00:15:29.179 { 00:15:29.179 "code": -32603, 00:15:29.179 "message": "bdevperf failed with error Operation not permitted" 00:15:29.179 } 00:15:31.080 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:31.081 00:15:31.081 real 0m12.647s 00:15:31.081 user 0m13.036s 00:15:31.081 sys 0m0.785s 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.081 08:42:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:15:31.081 ************************************ 00:15:31.081 END TEST bdev_error 00:15:31.081 ************************************ 00:15:31.081 08:42:06 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:31.081 08:42:06 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:15:31.081 08:42:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.081 08:42:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.081 08:42:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:31.081 ************************************ 00:15:31.081 START TEST bdev_stat 00:15:31.081 ************************************ 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=120194 00:15:31.081 
Process Bdev IO statistics testing pid: 120194 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 120194' 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 120194 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 120194 ']' 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:15:31.081 08:42:06 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:31.081 [2024-07-12 08:42:06.195254] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:15:31.081 [2024-07-12 08:42:06.195777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120194 ] 00:15:31.337 [2024-07-12 08:42:06.372605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.594 [2024-07-12 08:42:06.601869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.594 [2024-07-12 08:42:06.601876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.158 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.158 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:15:32.158 08:42:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:15:32.158 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.158 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:32.416 Malloc_STAT 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:32.416 [ 00:15:32.416 { 00:15:32.416 "name": "Malloc_STAT", 00:15:32.416 "aliases": [ 00:15:32.416 "ade21f79-35ec-43fd-b1da-a7dba5ae98c5" 00:15:32.416 ], 00:15:32.416 "product_name": "Malloc disk", 00:15:32.416 "block_size": 512, 00:15:32.416 "num_blocks": 262144, 00:15:32.416 "uuid": "ade21f79-35ec-43fd-b1da-a7dba5ae98c5", 00:15:32.416 "assigned_rate_limits": { 00:15:32.416 "rw_ios_per_sec": 0, 00:15:32.416 "rw_mbytes_per_sec": 0, 00:15:32.416 "r_mbytes_per_sec": 0, 00:15:32.416 "w_mbytes_per_sec": 0 00:15:32.416 }, 00:15:32.416 "claimed": false, 00:15:32.416 "zoned": false, 00:15:32.416 "supported_io_types": { 00:15:32.416 "read": true, 00:15:32.416 "write": true, 00:15:32.416 "unmap": true, 00:15:32.416 "flush": true, 00:15:32.416 "reset": true, 00:15:32.416 "nvme_admin": false, 00:15:32.416 "nvme_io": false, 00:15:32.416 "nvme_io_md": false, 00:15:32.416 "write_zeroes": true, 00:15:32.416 "zcopy": true, 00:15:32.416 "get_zone_info": false, 00:15:32.416 "zone_management": false, 00:15:32.416 "zone_append": false, 00:15:32.416 "compare": false, 00:15:32.416 "compare_and_write": false, 00:15:32.416 "abort": true, 00:15:32.416 "seek_hole": false, 00:15:32.416 "seek_data": false, 00:15:32.416 "copy": true, 00:15:32.416 "nvme_iov_md": false 00:15:32.416 }, 00:15:32.416 "memory_domains": [ 00:15:32.416 { 00:15:32.416 "dma_device_id": "system", 00:15:32.416 "dma_device_type": 1 00:15:32.416 }, 00:15:32.416 { 00:15:32.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.416 "dma_device_type": 2 00:15:32.416 } 00:15:32.416 ], 00:15:32.416 "driver_specific": {} 00:15:32.416 } 00:15:32.416 ] 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:15:32.416 08:42:07 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:32.416 Running I/O for 10 seconds... 
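While the 10-second job runs, stat_function_test samples the bdev's aggregate counters twice with a per-channel snapshot in between; a condensed sketch of that sampling, assuming the same rpc.py socket as above (the jq paths are the ones traced below):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  io_count1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  per_ch=$($RPC bdev_get_iostat -b Malloc_STAT -c)        # -c adds the per-channel breakdown
  ch1=$(jq -r '.channels[0].num_read_ops' <<< "$per_ch")
  ch2=$(jq -r '.channels[1].num_read_ops' <<< "$per_ch")
  io_count2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  echo "$io_count1 $((ch1 + ch2)) $io_count2"             # three samples, taken in this order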
00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:15:34.393 "tick_rate": 2200000000, 00:15:34.393 "ticks": 1889642608022, 00:15:34.393 "bdevs": [ 00:15:34.393 { 00:15:34.393 "name": "Malloc_STAT", 00:15:34.393 "bytes_read": 806392320, 00:15:34.393 "num_read_ops": 196867, 00:15:34.393 "bytes_written": 0, 00:15:34.393 "num_write_ops": 0, 00:15:34.393 "bytes_unmapped": 0, 00:15:34.393 "num_unmap_ops": 0, 00:15:34.393 "bytes_copied": 0, 00:15:34.393 "num_copy_ops": 0, 00:15:34.393 "read_latency_ticks": 2109973682592, 00:15:34.393 "max_read_latency_ticks": 15220634, 00:15:34.393 "min_read_latency_ticks": 376219, 00:15:34.393 "write_latency_ticks": 0, 00:15:34.393 "max_write_latency_ticks": 0, 00:15:34.393 "min_write_latency_ticks": 0, 00:15:34.393 "unmap_latency_ticks": 0, 00:15:34.393 "max_unmap_latency_ticks": 0, 00:15:34.393 "min_unmap_latency_ticks": 0, 00:15:34.393 "copy_latency_ticks": 0, 00:15:34.393 "max_copy_latency_ticks": 0, 00:15:34.393 "min_copy_latency_ticks": 0, 00:15:34.393 "io_error": {} 00:15:34.393 } 00:15:34.393 ] 00:15:34.393 }' 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=196867 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:15:34.393 "tick_rate": 2200000000, 00:15:34.393 "ticks": 1889813763264, 00:15:34.393 "name": "Malloc_STAT", 00:15:34.393 "channels": [ 00:15:34.393 { 00:15:34.393 "thread_id": 2, 00:15:34.393 "bytes_read": 411041792, 00:15:34.393 "num_read_ops": 100352, 00:15:34.393 "bytes_written": 0, 00:15:34.393 "num_write_ops": 0, 00:15:34.393 "bytes_unmapped": 0, 00:15:34.393 "num_unmap_ops": 0, 
00:15:34.393 "bytes_copied": 0, 00:15:34.393 "num_copy_ops": 0, 00:15:34.393 "read_latency_ticks": 1097654821749, 00:15:34.393 "max_read_latency_ticks": 15220634, 00:15:34.393 "min_read_latency_ticks": 8429259, 00:15:34.393 "write_latency_ticks": 0, 00:15:34.393 "max_write_latency_ticks": 0, 00:15:34.393 "min_write_latency_ticks": 0, 00:15:34.393 "unmap_latency_ticks": 0, 00:15:34.393 "max_unmap_latency_ticks": 0, 00:15:34.393 "min_unmap_latency_ticks": 0, 00:15:34.393 "copy_latency_ticks": 0, 00:15:34.393 "max_copy_latency_ticks": 0, 00:15:34.393 "min_copy_latency_ticks": 0 00:15:34.393 }, 00:15:34.393 { 00:15:34.393 "thread_id": 3, 00:15:34.393 "bytes_read": 429916160, 00:15:34.393 "num_read_ops": 104960, 00:15:34.393 "bytes_written": 0, 00:15:34.393 "num_write_ops": 0, 00:15:34.393 "bytes_unmapped": 0, 00:15:34.393 "num_unmap_ops": 0, 00:15:34.393 "bytes_copied": 0, 00:15:34.393 "num_copy_ops": 0, 00:15:34.393 "read_latency_ticks": 1100662343927, 00:15:34.393 "max_read_latency_ticks": 13654488, 00:15:34.393 "min_read_latency_ticks": 8397315, 00:15:34.393 "write_latency_ticks": 0, 00:15:34.393 "max_write_latency_ticks": 0, 00:15:34.393 "min_write_latency_ticks": 0, 00:15:34.393 "unmap_latency_ticks": 0, 00:15:34.393 "max_unmap_latency_ticks": 0, 00:15:34.393 "min_unmap_latency_ticks": 0, 00:15:34.393 "copy_latency_ticks": 0, 00:15:34.393 "max_copy_latency_ticks": 0, 00:15:34.393 "min_copy_latency_ticks": 0 00:15:34.393 } 00:15:34.393 ] 00:15:34.393 }' 00:15:34.393 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=100352 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=100352 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=104960 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=205312 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:15:34.650 "tick_rate": 2200000000, 00:15:34.650 "ticks": 1890109737831, 00:15:34.650 "bdevs": [ 00:15:34.650 { 00:15:34.650 "name": "Malloc_STAT", 00:15:34.650 "bytes_read": 899715584, 00:15:34.650 "num_read_ops": 219651, 00:15:34.650 "bytes_written": 0, 00:15:34.650 "num_write_ops": 0, 00:15:34.650 "bytes_unmapped": 0, 00:15:34.650 "num_unmap_ops": 0, 00:15:34.650 "bytes_copied": 0, 00:15:34.650 "num_copy_ops": 0, 00:15:34.650 "read_latency_ticks": 2348432994111, 00:15:34.650 "max_read_latency_ticks": 15220634, 00:15:34.650 "min_read_latency_ticks": 376219, 00:15:34.650 "write_latency_ticks": 0, 00:15:34.650 "max_write_latency_ticks": 0, 00:15:34.650 "min_write_latency_ticks": 0, 00:15:34.650 "unmap_latency_ticks": 0, 00:15:34.650 "max_unmap_latency_ticks": 0, 00:15:34.650 "min_unmap_latency_ticks": 0, 00:15:34.650 "copy_latency_ticks": 0, 00:15:34.650 "max_copy_latency_ticks": 0, 00:15:34.650 
"min_copy_latency_ticks": 0, 00:15:34.650 "io_error": {} 00:15:34.650 } 00:15:34.650 ] 00:15:34.650 }' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=219651 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 205312 -lt 196867 ']' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 205312 -gt 219651 ']' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:34.650 00:15:34.650 Latency(us) 00:15:34.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.650 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:34.650 Malloc_STAT : 2.16 51528.26 201.28 0.00 0.00 4955.39 1556.48 6940.86 00:15:34.650 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:34.650 Malloc_STAT : 2.16 53660.48 209.61 0.00 0.00 4758.98 1444.77 6225.92 00:15:34.650 =================================================================================================================== 00:15:34.650 Total : 105188.74 410.89 0.00 0.00 4855.19 1444.77 6940.86 00:15:34.650 0 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 120194 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 120194 ']' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 120194 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.650 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120194 00:15:34.908 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.908 killing process with pid 120194 00:15:34.908 Received shutdown signal, test time was about 2.292417 seconds 00:15:34.908 00:15:34.908 Latency(us) 00:15:34.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.908 =================================================================================================================== 00:15:34.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:34.908 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.908 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120194' 00:15:34.908 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 120194 00:15:34.908 08:42:09 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 120194 00:15:36.284 08:42:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:15:36.284 00:15:36.284 real 0m5.060s 00:15:36.284 user 0m9.725s 00:15:36.284 sys 0m0.399s 00:15:36.284 ************************************ 00:15:36.284 END TEST bdev_stat 00:15:36.284 ************************************ 
00:15:36.284 08:42:11 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.284 08:42:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 08:42:11 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:15:36.284 08:42:11 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:15:36.284 ************************************ 00:15:36.284 END TEST blockdev_general 00:15:36.284 ************************************ 00:15:36.284 00:15:36.284 real 2m30.946s 00:15:36.284 user 6m5.755s 00:15:36.284 sys 0m21.303s 00:15:36.284 08:42:11 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.284 08:42:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 08:42:11 -- common/autotest_common.sh@1142 -- # return 0 00:15:36.284 08:42:11 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:36.284 08:42:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:36.284 08:42:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.284 08:42:11 -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 ************************************ 00:15:36.284 START TEST bdev_raid 00:15:36.284 ************************************ 00:15:36.284 08:42:11 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:36.284 * Looking for test storage... 
00:15:36.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:36.284 08:42:11 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:15:36.284 08:42:11 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:15:36.284 08:42:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:36.284 08:42:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.284 08:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 ************************************ 00:15:36.284 START TEST raid_function_test_raid0 00:15:36.284 ************************************ 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=120375 00:15:36.285 Process raid pid: 120375 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 120375' 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 120375 /var/tmp/spdk-raid.sock 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 120375 ']' 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
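The raid0 function test starting here assembles a two-member raid from malloc bdevs, exports it over NBD, and then exercises it with dd and blkdiscard. A sketch of the configuration RPCs that configure_raid_bdev pipes through rpcs.txt, assuming base-bdev sizes inferred from the 131072-block, 512 B raid reported below; the -z strip size is an assumption, not read from this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create -b Base_1 32 512
  $RPC bdev_malloc_create -b Base_2 32 512
  $RPC bdev_raid_create -n raid -r raid0 -z 64 -b "Base_1 Base_2"     # -z: strip size (KiB), assumed
  $RPC bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)'  # -> raid
  $RPC nbd_start_disk raid /dev/nbd0                                  # export over NBD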
00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:15:36.285 08:42:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.285 [2024-07-12 08:42:11.433003] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:15:36.285 [2024-07-12 08:42:11.433409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.550 [2024-07-12 08:42:11.591438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.836 [2024-07-12 08:42:11.806669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.836 [2024-07-12 08:42:12.007140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:15:37.402 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:37.660 [2024-07-12 08:42:12.759677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:37.660 [2024-07-12 08:42:12.761810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:37.660 [2024-07-12 08:42:12.761902] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:37.660 [2024-07-12 08:42:12.761916] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:37.660 [2024-07-12 08:42:12.762094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:37.660 [2024-07-12 08:42:12.762510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:37.660 [2024-07-12 08:42:12.762537] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:15:37.660 [2024-07-12 08:42:12.762713] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.660 Base_1 00:15:37.660 Base_2 00:15:37.660 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:37.660 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:37.660 08:42:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # 
raid_bdev=raid 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.927 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:38.192 [2024-07-12 08:42:13.359867] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:38.192 /dev/nbd0 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.450 1+0 records in 00:15:38.450 1+0 records out 00:15:38.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027308 s, 15.0 MB/s 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
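The raid_unmap_data_verify loop traced next writes a random 2 MiB pattern file through /dev/nbd0, then for each (offset, count) pair zeroes that range in the local file, discards the same range on the device, and re-compares the whole span. One iteration, condensed from the trace (offset 1028, count 2035 shown; the check relies on discarded blocks reading back as zeroes):

  nbd=/dev/nbd0; f=/raidtest/raidrandtest; bs=512
  dd if=/dev/urandom of="$f" bs=$bs count=4096              # build the 2 MiB pattern
  dd if="$f" of="$nbd" bs=$bs count=4096 oflag=direct       # push it to the raid bdev
  blockdev --flushbufs "$nbd"
  cmp -b -n 2097152 "$f" "$nbd"                             # full readback must match
  off=1028; num=2035
  dd if=/dev/zero of="$f" bs=$bs seek=$off count=$num conv=notrunc   # zero the file range
  blkdiscard -o $((off * bs)) -l $((num * bs)) "$nbd"       # unmap the same range on the device
  blockdev --flushbufs "$nbd"
  cmp -b -n 2097152 "$f" "$nbd"                             # discarded range must compare equal again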
00:15:38.450 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.451 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:38.451 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:38.451 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:38.708 { 00:15:38.708 "nbd_device": "/dev/nbd0", 00:15:38.708 "bdev_name": "raid" 00:15:38.708 } 00:15:38.708 ]' 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:38.708 { 00:15:38.708 "nbd_device": "/dev/nbd0", 00:15:38.708 "bdev_name": "raid" 00:15:38.708 } 00:15:38.708 ]' 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:15:38.708 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:15:38.709 08:42:13 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:15:38.709 4096+0 records in 00:15:38.709 4096+0 records out 00:15:38.709 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292705 s, 71.6 MB/s 00:15:38.709 08:42:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:38.967 4096+0 records in 00:15:38.967 4096+0 records out 00:15:38.967 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.278748 s, 7.5 MB/s 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:38.967 128+0 records in 00:15:38.967 128+0 records out 00:15:38.967 65536 bytes (66 kB, 64 KiB) copied, 0.000673749 s, 97.3 MB/s 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:38.967 2035+0 records in 00:15:38.967 2035+0 records out 00:15:38.967 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00632019 s, 165 MB/s 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:38.967 456+0 records in 00:15:38.967 456+0 records out 00:15:38.967 233472 bytes (233 kB, 228 KiB) copied, 0.00149592 s, 156 MB/s 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.967 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.225 [2024-07-12 08:42:14.391491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.225 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:15:39.226 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.226 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:39.226 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:39.226 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:39.484 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.484 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.484 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 120375 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 120375 ']' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 120375 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120375 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.742 killing process with pid 120375 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120375' 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 120375 00:15:39.742 [2024-07-12 08:42:14.758991] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.742 [2024-07-12 08:42:14.759119] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.742 [2024-07-12 08:42:14.759176] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.742 [2024-07-12 08:42:14.759200] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:15:39.742 08:42:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 120375 00:15:39.742 [2024-07-12 08:42:14.929322] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.117 08:42:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:15:41.117 ************************************ 00:15:41.117 END TEST raid_function_test_raid0 00:15:41.117 ************************************ 00:15:41.117 00:15:41.117 real 0m4.690s 00:15:41.117 user 0m6.145s 00:15:41.117 sys 0m0.941s 00:15:41.117 08:42:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.117 08:42:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 08:42:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:41.117 08:42:16 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:15:41.117 08:42:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.117 08:42:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 
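The pass that just completed is the raid_unmap_data_verify() routine from bdev_raid.sh: seed the exported raid with random data, then for three offset/length pairs zero the range in the reference file, blkdiscard the same byte range on /dev/nbd0, flush, and require cmp to still match byte-for-byte. A minimal standalone sketch of that loop, with /tmp/raidrandtest standing in for the harness's /raidtest path and the block size hard-coded instead of parsed out of lsblk -o LOG-SEC as the trace does:

    #!/usr/bin/env bash
    # Sketch only: mirrors the discard-and-verify loop traced above.
    set -euo pipefail
    nbd=/dev/nbd0
    ref=/tmp/raidrandtest            # reference copy of the device contents
    blksize=512                      # the harness reads this from lsblk
    blks=4096

    # Seed device and reference file with identical random data.
    dd if=/dev/urandom of="$ref" bs="$blksize" count="$blks"
    dd if="$ref" of="$nbd" bs="$blksize" count="$blks" oflag=direct
    blockdev --flushbufs "$nbd"
    cmp -b -n $((blksize * blks)) "$ref" "$nbd"

    offs=(0 1028 321)                # block offsets used by the test
    nums=(128 2035 456)              # block counts used by the test
    for i in "${!offs[@]}"; do
        # Zero the range in the reference file, discard it on the device...
        dd if=/dev/zero of="$ref" bs="$blksize" seek="${offs[i]}" \
           count="${nums[i]}" conv=notrunc
        blkdiscard -o $((offs[i] * blksize)) -l $((nums[i] * blksize)) "$nbd"
        blockdev --flushbufs "$nbd"
        # ...then the device must still compare equal byte-for-byte.
        cmp -b -n $((blksize * blks)) "$ref" "$nbd"
    done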
00:15:41.117 08:42:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 ************************************ 00:15:41.117 START TEST raid_function_test_concat 00:15:41.117 ************************************ 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=120539 00:15:41.117 Process raid pid: 120539 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 120539' 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 120539 /var/tmp/spdk-raid.sock 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 120539 ']' 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.117 08:42:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 [2024-07-12 08:42:16.165070] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
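The waitforlisten call above gates the whole test on the freshly forked bdev_svc answering RPCs on /var/tmp/spdk-raid.sock. The real helper lives in autotest_common.sh; what follows is only a hedged approximation of its polling loop, with rpc_get_methods used as an assumed liveness probe (the max_retries=100 budget and the "Waiting for process..." message are taken from the trace, the 0.5 s sleep is illustrative):

    # Sketch: block until the SPDK app behind $pid serves RPCs on $rpc_addr.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            # Any cheap RPC works as a probe once the socket is up.
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                      # gave up waiting
    }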
00:15:41.117 [2024-07-12 08:42:16.165261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.375 [2024-07-12 08:42:16.324543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.633 [2024-07-12 08:42:16.583190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.633 [2024-07-12 08:42:16.783418] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:15:42.198 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:42.457 [2024-07-12 08:42:17.445485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:42.457 [2024-07-12 08:42:17.447682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:42.457 [2024-07-12 08:42:17.447790] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:42.457 [2024-07-12 08:42:17.447805] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:42.457 [2024-07-12 08:42:17.447998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:42.457 [2024-07-12 08:42:17.448417] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:42.457 [2024-07-12 08:42:17.448448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:15:42.457 [2024-07-12 08:42:17.448633] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.457 Base_1 00:15:42.457 Base_2 00:15:42.457 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:42.457 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:42.457 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:15:42.716 08:42:17 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.716 08:42:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:42.973 [2024-07-12 08:42:17.981638] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:42.973 /dev/nbd0 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.973 1+0 records in 00:15:42.973 1+0 records out 00:15:42.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346199 s, 11.8 MB/s 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:42.973 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:43.231 { 00:15:43.231 "nbd_device": "/dev/nbd0", 00:15:43.231 "bdev_name": "raid" 00:15:43.231 } 00:15:43.231 ]' 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:43.231 { 00:15:43.231 "nbd_device": "/dev/nbd0", 00:15:43.231 "bdev_name": "raid" 00:15:43.231 } 00:15:43.231 ]' 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:15:43.231 4096+0 records in 00:15:43.231 4096+0 records out 00:15:43.231 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.021493 s, 97.6 MB/s 00:15:43.231 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # 
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:43.488 4096+0 records in 00:15:43.488 4096+0 records out 00:15:43.488 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.279696 s, 7.5 MB/s 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:43.488 128+0 records in 00:15:43.488 128+0 records out 00:15:43.488 65536 bytes (66 kB, 64 KiB) copied, 0.000371114 s, 177 MB/s 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:43.488 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:43.746 2035+0 records in 00:15:43.746 2035+0 records out 00:15:43.746 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0054392 s, 192 MB/s 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:43.746 456+0 records in 00:15:43.746 456+0 records out 00:15:43.746 233472 bytes (233 kB, 228 KiB) copied, 0.0010424 s, 224 MB/s 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.746 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.004 08:42:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:15:44.004 [2024-07-12 08:42:18.986922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:44.004 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 120539 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 120539 ']' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 120539 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120539 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120539' 00:15:44.262 killing process with pid 120539 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 120539 00:15:44.262 08:42:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 120539 00:15:44.262 [2024-07-12 08:42:19.406281] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.262 [2024-07-12 08:42:19.406408] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.262 [2024-07-12 08:42:19.406467] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.262 [2024-07-12 08:42:19.406486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:15:44.520 [2024-07-12 08:42:19.574165] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.891 08:42:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:15:45.891 00:15:45.891 real 0m4.578s 00:15:45.891 user 0m5.960s 00:15:45.891 sys 0m0.830s 00:15:45.891 08:42:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.891 08:42:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:15:45.891 ************************************ 00:15:45.891 END TEST raid_function_test_concat 00:15:45.891 ************************************ 00:15:45.891 08:42:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:45.891 08:42:20 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:15:45.891 08:42:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:45.891 08:42:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 
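Note that nbd_stop_disk returning does not mean the device node is gone; the teardown above follows it with waitfornbd_exit, which polls /proc/partitions until the kernel actually drops the entry. The loop, reconstructed from the i=1..20 counter and sleep 0.1 visible in the xtrace:

    # Sketch of waitfornbd_exit: wait for /dev/$1 to vanish after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # grep -w matches the whole name, so nbd0 does not match nbd10.
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0          # kernel has released the device
            fi
            sleep 0.1
        done
        return 1                  # still present after ~2 s; fail the teardown
    }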
00:15:45.891 08:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.891 ************************************ 00:15:45.891 START TEST raid0_resize_test 00:15:45.891 ************************************ 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=120710 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 120710' 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:45.891 Process raid pid: 120710 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 120710 /var/tmp/spdk-raid.sock 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 120710 ']' 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:45.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.891 08:42:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.891 [2024-07-12 08:42:20.808462] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:15:45.891 [2024-07-12 08:42:20.808671] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.891 [2024-07-12 08:42:20.977053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.149 [2024-07-12 08:42:21.193007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.407 [2024-07-12 08:42:21.393835] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.666 08:42:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.666 08:42:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:15:46.666 08:42:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:46.924 Base_1 00:15:46.924 08:42:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:47.184 Base_2 00:15:47.184 08:42:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:47.750 [2024-07-12 08:42:22.632797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:47.750 [2024-07-12 08:42:22.634690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:47.751 [2024-07-12 08:42:22.634757] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:47.751 [2024-07-12 08:42:22.634771] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:47.751 [2024-07-12 08:42:22.634975] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:47.751 [2024-07-12 08:42:22.635306] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:47.751 [2024-07-12 08:42:22.635321] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007580 00:15:47.751 [2024-07-12 08:42:22.635549] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.751 08:42:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:47.751 [2024-07-12 08:42:22.856832] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:47.751 [2024-07-12 08:42:22.856893] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:47.751 true 00:15:47.751 08:42:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:47.751 08:42:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:15:48.011 [2024-07-12 08:42:23.137037] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.011 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:15:48.011 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:15:48.011 08:42:23 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:15:48.011 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:48.268 [2024-07-12 08:42:23.368911] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:48.268 [2024-07-12 08:42:23.368964] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:48.268 [2024-07-12 08:42:23.369031] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:15:48.268 true 00:15:48.268 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:48.268 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:15:48.527 [2024-07-12 08:42:23.637130] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 120710 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 120710 ']' 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 120710 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120710 00:15:48.527 killing process with pid 120710 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120710' 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 120710 00:15:48.527 08:42:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 120710 00:15:48.527 [2024-07-12 08:42:23.670663] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.527 [2024-07-12 08:42:23.670779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.527 [2024-07-12 08:42:23.670841] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.527 [2024-07-12 08:42:23.670853] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Raid, state offline 00:15:48.527 [2024-07-12 08:42:23.671440] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.920 ************************************ 00:15:49.920 END TEST raid0_resize_test 00:15:49.920 ************************************ 00:15:49.920 08:42:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:15:49.920 00:15:49.920 real 0m4.046s 00:15:49.920 user 0m5.930s 
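The resize test finishing here drives everything over RPC: two 32 MiB null bdevs become a raid0, each base is grown to 64 MiB, and the raid's num_blocks is re-read after each step. The raid only grows once the smaller base has been resized as well. A condensed sketch of that sequence, run from an SPDK checkout (socket path, sizes, and the 512-byte block size are copied from the trace; the MiB arithmetic is added for illustration):

    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_null_create Base_1 32 512
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    $rpc bdev_null_resize Base_1 64          # grow one leg only
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    echo $((blkcnt * 512 / 1048576))         # still 64: capacity is bounded
                                             # by the smallest base bdev
    $rpc bdev_null_resize Base_2 64          # grow the second leg
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    echo $((blkcnt * 512 / 1048576))         # now 128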
00:15:49.920 sys 0m0.493s 00:15:49.920 08:42:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.920 08:42:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.920 08:42:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:49.920 08:42:24 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:15:49.920 08:42:24 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:49.920 08:42:24 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:49.920 08:42:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:49.920 08:42:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.920 08:42:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.920 ************************************ 00:15:49.920 START TEST raid_state_function_test 00:15:49.920 ************************************ 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 
-- # '[' false = true ']' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=120799 00:15:49.920 Process raid pid: 120799 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120799' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 120799 /var/tmp/spdk-raid.sock 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 120799 ']' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.920 08:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.920 [2024-07-12 08:42:24.904599] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:15:49.920 [2024-07-12 08:42:24.904806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.920 [2024-07-12 08:42:25.072702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.178 [2024-07-12 08:42:25.311032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.435 [2024-07-12 08:42:25.509897] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.693 08:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.693 08:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:50.693 08:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:50.951 [2024-07-12 08:42:26.129298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.951 [2024-07-12 08:42:26.129427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.951 [2024-07-12 08:42:26.129444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.951 [2024-07-12 08:42:26.129475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.209 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.467 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.468 "name": "Existed_Raid", 00:15:51.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.468 "strip_size_kb": 64, 00:15:51.468 "state": "configuring", 00:15:51.468 "raid_level": "raid0", 00:15:51.468 "superblock": false, 00:15:51.468 "num_base_bdevs": 2, 00:15:51.468 "num_base_bdevs_discovered": 0, 00:15:51.468 "num_base_bdevs_operational": 2, 00:15:51.468 "base_bdevs_list": [ 00:15:51.468 { 00:15:51.468 "name": "BaseBdev1", 00:15:51.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.468 "is_configured": false, 00:15:51.468 "data_offset": 0, 00:15:51.468 "data_size": 0 00:15:51.468 }, 00:15:51.468 { 00:15:51.468 "name": "BaseBdev2", 00:15:51.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.468 "is_configured": false, 00:15:51.468 "data_offset": 0, 00:15:51.468 "data_size": 0 00:15:51.468 } 00:15:51.468 ] 00:15:51.468 }' 00:15:51.468 08:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.468 08:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.032 08:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.289 [2024-07-12 08:42:27.357391] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.289 [2024-07-12 08:42:27.357441] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:52.289 08:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:52.547 [2024-07-12 08:42:27.629469] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.547 [2024-07-12 08:42:27.629556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.547 [2024-07-12 08:42:27.629571] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.547 [2024-07-12 08:42:27.629600] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.547 08:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.804 [2024-07-12 08:42:27.936930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.804 BaseBdev1 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:52.804 08:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.061 08:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.319 [ 00:15:53.319 { 00:15:53.319 "name": "BaseBdev1", 00:15:53.319 "aliases": [ 00:15:53.319 "626621fd-c8f9-4887-bcda-08b0166b509e" 00:15:53.319 ], 00:15:53.319 "product_name": "Malloc disk", 00:15:53.319 "block_size": 512, 00:15:53.319 "num_blocks": 65536, 00:15:53.319 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:53.319 "assigned_rate_limits": { 00:15:53.319 "rw_ios_per_sec": 0, 00:15:53.319 "rw_mbytes_per_sec": 0, 00:15:53.319 "r_mbytes_per_sec": 0, 00:15:53.319 "w_mbytes_per_sec": 0 00:15:53.319 }, 00:15:53.319 "claimed": true, 00:15:53.319 "claim_type": "exclusive_write", 00:15:53.319 "zoned": false, 00:15:53.319 "supported_io_types": { 00:15:53.319 "read": true, 00:15:53.319 "write": true, 00:15:53.319 "unmap": true, 00:15:53.319 "flush": true, 00:15:53.319 "reset": true, 00:15:53.319 "nvme_admin": false, 00:15:53.319 "nvme_io": false, 00:15:53.319 "nvme_io_md": false, 00:15:53.319 "write_zeroes": true, 00:15:53.319 "zcopy": true, 00:15:53.319 "get_zone_info": false, 00:15:53.319 "zone_management": false, 00:15:53.319 "zone_append": false, 00:15:53.319 "compare": false, 00:15:53.319 "compare_and_write": false, 00:15:53.319 "abort": true, 00:15:53.319 "seek_hole": false, 00:15:53.319 "seek_data": false, 00:15:53.319 "copy": true, 00:15:53.319 "nvme_iov_md": false 00:15:53.319 }, 00:15:53.319 "memory_domains": [ 00:15:53.319 { 00:15:53.319 "dma_device_id": "system", 00:15:53.319 "dma_device_type": 1 00:15:53.319 }, 00:15:53.319 { 00:15:53.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.319 "dma_device_type": 2 00:15:53.319 } 00:15:53.319 ], 00:15:53.319 "driver_specific": {} 00:15:53.319 } 00:15:53.319 ] 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.319 08:42:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.319 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.578 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.578 "name": "Existed_Raid", 00:15:53.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.578 "strip_size_kb": 64, 00:15:53.578 "state": "configuring", 00:15:53.578 "raid_level": "raid0", 00:15:53.578 "superblock": false, 00:15:53.578 "num_base_bdevs": 2, 00:15:53.578 "num_base_bdevs_discovered": 1, 00:15:53.578 "num_base_bdevs_operational": 2, 00:15:53.578 "base_bdevs_list": [ 00:15:53.578 { 00:15:53.578 "name": "BaseBdev1", 00:15:53.578 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:53.578 "is_configured": true, 00:15:53.578 "data_offset": 0, 00:15:53.578 "data_size": 65536 00:15:53.578 }, 00:15:53.578 { 00:15:53.578 "name": "BaseBdev2", 00:15:53.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.578 "is_configured": false, 00:15:53.578 "data_offset": 0, 00:15:53.578 "data_size": 0 00:15:53.578 } 00:15:53.578 ] 00:15:53.578 }' 00:15:53.578 08:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.578 08:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.154 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.412 [2024-07-12 08:42:29.573338] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.412 [2024-07-12 08:42:29.573409] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:15:54.412 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:54.670 [2024-07-12 08:42:29.845417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.670 [2024-07-12 08:42:29.847586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.670 [2024-07-12 08:42:29.847791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
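Note: the trace above, and the loop that begins on the next line, drive a fixed RPC sequence against the test application listening on /var/tmp/spdk-raid.sock. A condensed sketch of that sequence, assembled only from commands already visible in this log (socket path, bdev names, sizes and the jq filter are the ones logged), is:

  # create the raid bdev before any base bdev exists -> state stays "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # add the base bdevs one at a time; the raid reports "online" only after the last one is claimed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  # inspect the reported state after each step
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'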
00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.928 08:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.185 08:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.185 "name": "Existed_Raid", 00:15:55.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.185 "strip_size_kb": 64, 00:15:55.185 "state": "configuring", 00:15:55.185 "raid_level": "raid0", 00:15:55.185 "superblock": false, 00:15:55.185 "num_base_bdevs": 2, 00:15:55.185 "num_base_bdevs_discovered": 1, 00:15:55.185 "num_base_bdevs_operational": 2, 00:15:55.185 "base_bdevs_list": [ 00:15:55.185 { 00:15:55.185 "name": "BaseBdev1", 00:15:55.185 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:55.185 "is_configured": true, 00:15:55.185 "data_offset": 0, 00:15:55.185 "data_size": 65536 00:15:55.185 }, 00:15:55.185 { 00:15:55.185 "name": "BaseBdev2", 00:15:55.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.185 "is_configured": false, 00:15:55.185 "data_offset": 0, 00:15:55.185 "data_size": 0 00:15:55.185 } 00:15:55.185 ] 00:15:55.185 }' 00:15:55.185 08:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.185 08:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.750 08:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.007 [2024-07-12 08:42:31.090951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.007 [2024-07-12 08:42:31.091166] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:56.007 [2024-07-12 08:42:31.091211] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:56.007 [2024-07-12 08:42:31.091477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:56.007 [2024-07-12 08:42:31.091962] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x616000007580 00:15:56.007 [2024-07-12 08:42:31.092085] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:56.007 [2024-07-12 08:42:31.092470] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.007 BaseBdev2 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:56.007 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.264 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.521 [ 00:15:56.521 { 00:15:56.521 "name": "BaseBdev2", 00:15:56.521 "aliases": [ 00:15:56.521 "c004ca9d-9ad1-4e38-b0ee-992d180d40aa" 00:15:56.521 ], 00:15:56.521 "product_name": "Malloc disk", 00:15:56.521 "block_size": 512, 00:15:56.521 "num_blocks": 65536, 00:15:56.521 "uuid": "c004ca9d-9ad1-4e38-b0ee-992d180d40aa", 00:15:56.521 "assigned_rate_limits": { 00:15:56.521 "rw_ios_per_sec": 0, 00:15:56.521 "rw_mbytes_per_sec": 0, 00:15:56.521 "r_mbytes_per_sec": 0, 00:15:56.521 "w_mbytes_per_sec": 0 00:15:56.521 }, 00:15:56.521 "claimed": true, 00:15:56.521 "claim_type": "exclusive_write", 00:15:56.521 "zoned": false, 00:15:56.521 "supported_io_types": { 00:15:56.521 "read": true, 00:15:56.521 "write": true, 00:15:56.521 "unmap": true, 00:15:56.521 "flush": true, 00:15:56.521 "reset": true, 00:15:56.521 "nvme_admin": false, 00:15:56.521 "nvme_io": false, 00:15:56.521 "nvme_io_md": false, 00:15:56.521 "write_zeroes": true, 00:15:56.521 "zcopy": true, 00:15:56.521 "get_zone_info": false, 00:15:56.521 "zone_management": false, 00:15:56.521 "zone_append": false, 00:15:56.521 "compare": false, 00:15:56.521 "compare_and_write": false, 00:15:56.521 "abort": true, 00:15:56.521 "seek_hole": false, 00:15:56.521 "seek_data": false, 00:15:56.521 "copy": true, 00:15:56.521 "nvme_iov_md": false 00:15:56.521 }, 00:15:56.521 "memory_domains": [ 00:15:56.521 { 00:15:56.521 "dma_device_id": "system", 00:15:56.521 "dma_device_type": 1 00:15:56.521 }, 00:15:56.521 { 00:15:56.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.521 "dma_device_type": 2 00:15:56.521 } 00:15:56.521 ], 00:15:56.521 "driver_specific": {} 00:15:56.521 } 00:15:56.521 ] 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:56.521 08:42:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.521 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.778 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.778 "name": "Existed_Raid", 00:15:56.778 "uuid": "e7b4cbac-57db-4f6f-9d5e-b0f64eb81c50", 00:15:56.778 "strip_size_kb": 64, 00:15:56.778 "state": "online", 00:15:56.778 "raid_level": "raid0", 00:15:56.778 "superblock": false, 00:15:56.778 "num_base_bdevs": 2, 00:15:56.778 "num_base_bdevs_discovered": 2, 00:15:56.778 "num_base_bdevs_operational": 2, 00:15:56.778 "base_bdevs_list": [ 00:15:56.778 { 00:15:56.778 "name": "BaseBdev1", 00:15:56.778 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:56.778 "is_configured": true, 00:15:56.778 "data_offset": 0, 00:15:56.778 "data_size": 65536 00:15:56.778 }, 00:15:56.778 { 00:15:56.778 "name": "BaseBdev2", 00:15:56.778 "uuid": "c004ca9d-9ad1-4e38-b0ee-992d180d40aa", 00:15:56.778 "is_configured": true, 00:15:56.778 "data_offset": 0, 00:15:56.778 "data_size": 65536 00:15:56.778 } 00:15:56.778 ] 00:15:56.778 }' 00:15:56.778 08:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.778 08:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:57.711 [2024-07-12 08:42:32.823737] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:57.711 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:57.711 "name": "Existed_Raid", 00:15:57.711 "aliases": [ 00:15:57.711 "e7b4cbac-57db-4f6f-9d5e-b0f64eb81c50" 00:15:57.711 ], 00:15:57.711 "product_name": "Raid Volume", 00:15:57.711 "block_size": 512, 00:15:57.711 "num_blocks": 131072, 00:15:57.711 "uuid": "e7b4cbac-57db-4f6f-9d5e-b0f64eb81c50", 00:15:57.711 "assigned_rate_limits": { 00:15:57.711 "rw_ios_per_sec": 0, 00:15:57.711 "rw_mbytes_per_sec": 0, 00:15:57.711 "r_mbytes_per_sec": 0, 00:15:57.711 "w_mbytes_per_sec": 0 00:15:57.711 }, 00:15:57.712 "claimed": false, 00:15:57.712 "zoned": false, 00:15:57.712 "supported_io_types": { 00:15:57.712 "read": true, 00:15:57.712 "write": true, 00:15:57.712 "unmap": true, 00:15:57.712 "flush": true, 00:15:57.712 "reset": true, 00:15:57.712 "nvme_admin": false, 00:15:57.712 "nvme_io": false, 00:15:57.712 "nvme_io_md": false, 00:15:57.712 "write_zeroes": true, 00:15:57.712 "zcopy": false, 00:15:57.712 "get_zone_info": false, 00:15:57.712 "zone_management": false, 00:15:57.712 "zone_append": false, 00:15:57.712 "compare": false, 00:15:57.712 "compare_and_write": false, 00:15:57.712 "abort": false, 00:15:57.712 "seek_hole": false, 00:15:57.712 "seek_data": false, 00:15:57.712 "copy": false, 00:15:57.712 "nvme_iov_md": false 00:15:57.712 }, 00:15:57.712 "memory_domains": [ 00:15:57.712 { 00:15:57.712 "dma_device_id": "system", 00:15:57.712 "dma_device_type": 1 00:15:57.712 }, 00:15:57.712 { 00:15:57.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.712 "dma_device_type": 2 00:15:57.712 }, 00:15:57.712 { 00:15:57.712 "dma_device_id": "system", 00:15:57.712 "dma_device_type": 1 00:15:57.712 }, 00:15:57.712 { 00:15:57.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.712 "dma_device_type": 2 00:15:57.712 } 00:15:57.712 ], 00:15:57.712 "driver_specific": { 00:15:57.712 "raid": { 00:15:57.712 "uuid": "e7b4cbac-57db-4f6f-9d5e-b0f64eb81c50", 00:15:57.712 "strip_size_kb": 64, 00:15:57.712 "state": "online", 00:15:57.712 "raid_level": "raid0", 00:15:57.712 "superblock": false, 00:15:57.712 "num_base_bdevs": 2, 00:15:57.712 "num_base_bdevs_discovered": 2, 00:15:57.712 "num_base_bdevs_operational": 2, 00:15:57.712 "base_bdevs_list": [ 00:15:57.712 { 00:15:57.712 "name": "BaseBdev1", 00:15:57.712 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:57.712 "is_configured": true, 00:15:57.712 "data_offset": 0, 00:15:57.712 "data_size": 65536 00:15:57.712 }, 00:15:57.712 { 00:15:57.712 "name": "BaseBdev2", 00:15:57.712 "uuid": "c004ca9d-9ad1-4e38-b0ee-992d180d40aa", 00:15:57.712 "is_configured": true, 00:15:57.712 "data_offset": 0, 00:15:57.712 "data_size": 65536 00:15:57.712 } 00:15:57.712 ] 00:15:57.712 } 00:15:57.712 } 00:15:57.712 }' 00:15:57.712 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.712 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:57.712 BaseBdev2' 00:15:57.712 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.712 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:57.712 08:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.971 08:42:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.971 "name": "BaseBdev1", 00:15:57.971 "aliases": [ 00:15:57.971 "626621fd-c8f9-4887-bcda-08b0166b509e" 00:15:57.971 ], 00:15:57.971 "product_name": "Malloc disk", 00:15:57.971 "block_size": 512, 00:15:57.971 "num_blocks": 65536, 00:15:57.971 "uuid": "626621fd-c8f9-4887-bcda-08b0166b509e", 00:15:57.971 "assigned_rate_limits": { 00:15:57.971 "rw_ios_per_sec": 0, 00:15:57.971 "rw_mbytes_per_sec": 0, 00:15:57.971 "r_mbytes_per_sec": 0, 00:15:57.971 "w_mbytes_per_sec": 0 00:15:57.971 }, 00:15:57.971 "claimed": true, 00:15:57.971 "claim_type": "exclusive_write", 00:15:57.971 "zoned": false, 00:15:57.971 "supported_io_types": { 00:15:57.971 "read": true, 00:15:57.971 "write": true, 00:15:57.971 "unmap": true, 00:15:57.971 "flush": true, 00:15:57.971 "reset": true, 00:15:57.971 "nvme_admin": false, 00:15:57.971 "nvme_io": false, 00:15:57.971 "nvme_io_md": false, 00:15:57.971 "write_zeroes": true, 00:15:57.971 "zcopy": true, 00:15:57.971 "get_zone_info": false, 00:15:57.971 "zone_management": false, 00:15:57.971 "zone_append": false, 00:15:57.971 "compare": false, 00:15:57.971 "compare_and_write": false, 00:15:57.971 "abort": true, 00:15:57.971 "seek_hole": false, 00:15:57.971 "seek_data": false, 00:15:57.971 "copy": true, 00:15:57.971 "nvme_iov_md": false 00:15:57.971 }, 00:15:57.971 "memory_domains": [ 00:15:57.971 { 00:15:57.971 "dma_device_id": "system", 00:15:57.971 "dma_device_type": 1 00:15:57.971 }, 00:15:57.971 { 00:15:57.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.971 "dma_device_type": 2 00:15:57.971 } 00:15:57.971 ], 00:15:57.971 "driver_specific": {} 00:15:57.971 }' 00:15:57.971 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.293 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:58.566 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.825 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.825 "name": "BaseBdev2", 00:15:58.825 "aliases": [ 00:15:58.825 "c004ca9d-9ad1-4e38-b0ee-992d180d40aa" 
00:15:58.825 ], 00:15:58.825 "product_name": "Malloc disk", 00:15:58.825 "block_size": 512, 00:15:58.825 "num_blocks": 65536, 00:15:58.825 "uuid": "c004ca9d-9ad1-4e38-b0ee-992d180d40aa", 00:15:58.825 "assigned_rate_limits": { 00:15:58.825 "rw_ios_per_sec": 0, 00:15:58.825 "rw_mbytes_per_sec": 0, 00:15:58.825 "r_mbytes_per_sec": 0, 00:15:58.825 "w_mbytes_per_sec": 0 00:15:58.825 }, 00:15:58.825 "claimed": true, 00:15:58.825 "claim_type": "exclusive_write", 00:15:58.825 "zoned": false, 00:15:58.825 "supported_io_types": { 00:15:58.825 "read": true, 00:15:58.825 "write": true, 00:15:58.825 "unmap": true, 00:15:58.825 "flush": true, 00:15:58.825 "reset": true, 00:15:58.825 "nvme_admin": false, 00:15:58.825 "nvme_io": false, 00:15:58.825 "nvme_io_md": false, 00:15:58.825 "write_zeroes": true, 00:15:58.825 "zcopy": true, 00:15:58.825 "get_zone_info": false, 00:15:58.825 "zone_management": false, 00:15:58.825 "zone_append": false, 00:15:58.825 "compare": false, 00:15:58.825 "compare_and_write": false, 00:15:58.825 "abort": true, 00:15:58.825 "seek_hole": false, 00:15:58.825 "seek_data": false, 00:15:58.825 "copy": true, 00:15:58.825 "nvme_iov_md": false 00:15:58.825 }, 00:15:58.825 "memory_domains": [ 00:15:58.825 { 00:15:58.825 "dma_device_id": "system", 00:15:58.825 "dma_device_type": 1 00:15:58.825 }, 00:15:58.825 { 00:15:58.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.825 "dma_device_type": 2 00:15:58.825 } 00:15:58.825 ], 00:15:58.825 "driver_specific": {} 00:15:58.825 }' 00:15:58.825 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.825 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.825 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.825 08:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.082 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.340 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.340 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:59.598 [2024-07-12 08:42:34.571941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.598 [2024-07-12 08:42:34.572136] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.598 [2024-07-12 08:42:34.572351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.598 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.856 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.856 "name": "Existed_Raid", 00:15:59.856 "uuid": "e7b4cbac-57db-4f6f-9d5e-b0f64eb81c50", 00:15:59.856 "strip_size_kb": 64, 00:15:59.856 "state": "offline", 00:15:59.856 "raid_level": "raid0", 00:15:59.856 "superblock": false, 00:15:59.856 "num_base_bdevs": 2, 00:15:59.856 "num_base_bdevs_discovered": 1, 00:15:59.856 "num_base_bdevs_operational": 1, 00:15:59.856 "base_bdevs_list": [ 00:15:59.856 { 00:15:59.856 "name": null, 00:15:59.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.856 "is_configured": false, 00:15:59.856 "data_offset": 0, 00:15:59.856 "data_size": 65536 00:15:59.856 }, 00:15:59.856 { 00:15:59.856 "name": "BaseBdev2", 00:15:59.856 "uuid": "c004ca9d-9ad1-4e38-b0ee-992d180d40aa", 00:15:59.856 "is_configured": true, 00:15:59.856 "data_offset": 0, 00:15:59.856 "data_size": 65536 00:15:59.856 } 00:15:59.856 ] 00:15:59.856 }' 00:15:59.856 08:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.856 08:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.421 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:00.421 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.421 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.421 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.679 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.679 08:42:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.679 08:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.937 [2024-07-12 08:42:36.024002] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.937 [2024-07-12 08:42:36.024298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:00.937 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.937 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.937 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.937 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 120799 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 120799 ']' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 120799 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120799 00:16:01.504 killing process with pid 120799 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120799' 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 120799 00:16:01.504 08:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 120799 00:16:01.504 [2024-07-12 08:42:36.418502] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.504 [2024-07-12 08:42:36.418650] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.436 ************************************ 00:16:02.436 END TEST raid_state_function_test 00:16:02.436 ************************************ 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:02.436 00:16:02.436 real 0m12.722s 00:16:02.436 user 0m22.639s 00:16:02.436 sys 0m1.408s 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.436 08:42:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:02.436 08:42:37 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:02.436 08:42:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:02.436 08:42:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.436 08:42:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.436 ************************************ 00:16:02.436 START TEST raid_state_function_test_sb 00:16:02.436 ************************************ 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:02.436 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=121205 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 
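Note: this second pass reruns the same state checks with superblock=true, so the raid is created with the additional -s flag and each base bdev reserves room for the on-disk superblock; in the dumps further down this shows up as data_offset 2048 / data_size 63488 (instead of 0 / 65536) and a raid volume of 126976 blocks. The create step, exactly as it appears later in this log, is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid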
00:16:02.437 Process raid pid: 121205 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121205' 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 121205 /var/tmp/spdk-raid.sock 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 121205 ']' 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:02.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.437 08:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.694 [2024-07-12 08:42:37.703104] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:16:02.694 [2024-07-12 08:42:37.703408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.694 [2024-07-12 08:42:37.881678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.951 [2024-07-12 08:42:38.093879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.209 [2024-07-12 08:42:38.296853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.466 08:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.466 08:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:03.466 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:04.042 [2024-07-12 08:42:38.934586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.043 [2024-07-12 08:42:38.934697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.043 [2024-07-12 08:42:38.934714] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.043 [2024-07-12 08:42:38.934745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.043 08:42:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.043 08:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.306 08:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.306 "name": "Existed_Raid", 00:16:04.306 "uuid": "f03695bf-784b-4639-8915-d25a8af33ec3", 00:16:04.306 "strip_size_kb": 64, 00:16:04.306 "state": "configuring", 00:16:04.306 "raid_level": "raid0", 00:16:04.306 "superblock": true, 00:16:04.306 "num_base_bdevs": 2, 00:16:04.306 "num_base_bdevs_discovered": 0, 00:16:04.306 "num_base_bdevs_operational": 2, 00:16:04.306 "base_bdevs_list": [ 00:16:04.306 { 00:16:04.306 "name": "BaseBdev1", 00:16:04.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.306 "is_configured": false, 00:16:04.306 "data_offset": 0, 00:16:04.306 "data_size": 0 00:16:04.306 }, 00:16:04.306 { 00:16:04.306 "name": "BaseBdev2", 00:16:04.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.306 "is_configured": false, 00:16:04.306 "data_offset": 0, 00:16:04.306 "data_size": 0 00:16:04.306 } 00:16:04.306 ] 00:16:04.306 }' 00:16:04.306 08:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.306 08:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.894 08:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.151 [2024-07-12 08:42:40.158678] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.151 [2024-07-12 08:42:40.158741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:05.151 08:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:05.408 [2024-07-12 08:42:40.398764] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.408 [2024-07-12 08:42:40.398848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.408 [2024-07-12 08:42:40.398863] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.408 [2024-07-12 08:42:40.398891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.408 08:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.665 [2024-07-12 08:42:40.674151] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.665 BaseBdev1 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.665 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.923 08:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.181 [ 00:16:06.181 { 00:16:06.181 "name": "BaseBdev1", 00:16:06.181 "aliases": [ 00:16:06.181 "97b96371-8688-4aad-8f1f-6c4fb746c7a9" 00:16:06.181 ], 00:16:06.181 "product_name": "Malloc disk", 00:16:06.181 "block_size": 512, 00:16:06.181 "num_blocks": 65536, 00:16:06.181 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:06.181 "assigned_rate_limits": { 00:16:06.181 "rw_ios_per_sec": 0, 00:16:06.181 "rw_mbytes_per_sec": 0, 00:16:06.181 "r_mbytes_per_sec": 0, 00:16:06.181 "w_mbytes_per_sec": 0 00:16:06.181 }, 00:16:06.181 "claimed": true, 00:16:06.181 "claim_type": "exclusive_write", 00:16:06.181 "zoned": false, 00:16:06.181 "supported_io_types": { 00:16:06.181 "read": true, 00:16:06.181 "write": true, 00:16:06.181 "unmap": true, 00:16:06.181 "flush": true, 00:16:06.181 "reset": true, 00:16:06.181 "nvme_admin": false, 00:16:06.181 "nvme_io": false, 00:16:06.181 "nvme_io_md": false, 00:16:06.181 "write_zeroes": true, 00:16:06.181 "zcopy": true, 00:16:06.181 "get_zone_info": false, 00:16:06.181 "zone_management": false, 00:16:06.181 "zone_append": false, 00:16:06.181 "compare": false, 00:16:06.181 "compare_and_write": false, 00:16:06.181 "abort": true, 00:16:06.181 "seek_hole": false, 00:16:06.181 "seek_data": false, 00:16:06.181 "copy": true, 00:16:06.181 "nvme_iov_md": false 00:16:06.181 }, 00:16:06.181 "memory_domains": [ 00:16:06.181 { 00:16:06.181 "dma_device_id": "system", 00:16:06.181 "dma_device_type": 1 00:16:06.181 }, 00:16:06.181 { 00:16:06.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.182 "dma_device_type": 2 00:16:06.182 } 00:16:06.182 ], 00:16:06.182 "driver_specific": {} 00:16:06.182 } 00:16:06.182 ] 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.182 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.440 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.440 "name": "Existed_Raid", 00:16:06.440 "uuid": "f05018d7-c8f3-458d-8507-04860ba0f112", 00:16:06.440 "strip_size_kb": 64, 00:16:06.440 "state": "configuring", 00:16:06.440 "raid_level": "raid0", 00:16:06.440 "superblock": true, 00:16:06.440 "num_base_bdevs": 2, 00:16:06.440 "num_base_bdevs_discovered": 1, 00:16:06.440 "num_base_bdevs_operational": 2, 00:16:06.440 "base_bdevs_list": [ 00:16:06.440 { 00:16:06.440 "name": "BaseBdev1", 00:16:06.440 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:06.440 "is_configured": true, 00:16:06.440 "data_offset": 2048, 00:16:06.440 "data_size": 63488 00:16:06.440 }, 00:16:06.440 { 00:16:06.440 "name": "BaseBdev2", 00:16:06.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.440 "is_configured": false, 00:16:06.440 "data_offset": 0, 00:16:06.440 "data_size": 0 00:16:06.440 } 00:16:06.440 ] 00:16:06.440 }' 00:16:06.440 08:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.440 08:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.064 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.322 [2024-07-12 08:42:42.510627] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.322 [2024-07-12 08:42:42.510699] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:07.579 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:07.579 [2024-07-12 08:42:42.746724] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.580 [2024-07-12 08:42:42.748812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.580 [2024-07-12 08:42:42.748879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 
64 2 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.580 08:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.145 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.145 "name": "Existed_Raid", 00:16:08.146 "uuid": "c9a8fa0a-d38d-4baf-bb28-76976b65166f", 00:16:08.146 "strip_size_kb": 64, 00:16:08.146 "state": "configuring", 00:16:08.146 "raid_level": "raid0", 00:16:08.146 "superblock": true, 00:16:08.146 "num_base_bdevs": 2, 00:16:08.146 "num_base_bdevs_discovered": 1, 00:16:08.146 "num_base_bdevs_operational": 2, 00:16:08.146 "base_bdevs_list": [ 00:16:08.146 { 00:16:08.146 "name": "BaseBdev1", 00:16:08.146 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:08.146 "is_configured": true, 00:16:08.146 "data_offset": 2048, 00:16:08.146 "data_size": 63488 00:16:08.146 }, 00:16:08.146 { 00:16:08.146 "name": "BaseBdev2", 00:16:08.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.146 "is_configured": false, 00:16:08.146 "data_offset": 0, 00:16:08.146 "data_size": 0 00:16:08.146 } 00:16:08.146 ] 00:16:08.146 }' 00:16:08.146 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.146 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.710 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.969 [2024-07-12 08:42:44.016256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.969 [2024-07-12 08:42:44.016541] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:08.969 [2024-07-12 08:42:44.016567] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:08.969 [2024-07-12 08:42:44.016710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:08.969 BaseBdev2 00:16:08.969 [2024-07-12 08:42:44.017084] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:08.969 [2024-07-12 08:42:44.017111] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000007580 00:16:08.969 [2024-07-12 08:42:44.017274] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.969 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.227 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.485 [ 00:16:09.485 { 00:16:09.485 "name": "BaseBdev2", 00:16:09.485 "aliases": [ 00:16:09.485 "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e" 00:16:09.485 ], 00:16:09.485 "product_name": "Malloc disk", 00:16:09.485 "block_size": 512, 00:16:09.485 "num_blocks": 65536, 00:16:09.485 "uuid": "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e", 00:16:09.485 "assigned_rate_limits": { 00:16:09.485 "rw_ios_per_sec": 0, 00:16:09.485 "rw_mbytes_per_sec": 0, 00:16:09.485 "r_mbytes_per_sec": 0, 00:16:09.485 "w_mbytes_per_sec": 0 00:16:09.485 }, 00:16:09.485 "claimed": true, 00:16:09.485 "claim_type": "exclusive_write", 00:16:09.485 "zoned": false, 00:16:09.485 "supported_io_types": { 00:16:09.485 "read": true, 00:16:09.485 "write": true, 00:16:09.485 "unmap": true, 00:16:09.485 "flush": true, 00:16:09.485 "reset": true, 00:16:09.485 "nvme_admin": false, 00:16:09.485 "nvme_io": false, 00:16:09.485 "nvme_io_md": false, 00:16:09.485 "write_zeroes": true, 00:16:09.485 "zcopy": true, 00:16:09.485 "get_zone_info": false, 00:16:09.485 "zone_management": false, 00:16:09.485 "zone_append": false, 00:16:09.485 "compare": false, 00:16:09.485 "compare_and_write": false, 00:16:09.485 "abort": true, 00:16:09.485 "seek_hole": false, 00:16:09.485 "seek_data": false, 00:16:09.485 "copy": true, 00:16:09.485 "nvme_iov_md": false 00:16:09.485 }, 00:16:09.485 "memory_domains": [ 00:16:09.485 { 00:16:09.485 "dma_device_id": "system", 00:16:09.485 "dma_device_type": 1 00:16:09.485 }, 00:16:09.485 { 00:16:09.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.485 "dma_device_type": 2 00:16:09.485 } 00:16:09.485 ], 00:16:09.485 "driver_specific": {} 00:16:09.485 } 00:16:09.485 ] 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.485 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.743 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.743 "name": "Existed_Raid", 00:16:09.743 "uuid": "c9a8fa0a-d38d-4baf-bb28-76976b65166f", 00:16:09.743 "strip_size_kb": 64, 00:16:09.743 "state": "online", 00:16:09.743 "raid_level": "raid0", 00:16:09.743 "superblock": true, 00:16:09.743 "num_base_bdevs": 2, 00:16:09.743 "num_base_bdevs_discovered": 2, 00:16:09.743 "num_base_bdevs_operational": 2, 00:16:09.743 "base_bdevs_list": [ 00:16:09.743 { 00:16:09.743 "name": "BaseBdev1", 00:16:09.743 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:09.743 "is_configured": true, 00:16:09.743 "data_offset": 2048, 00:16:09.743 "data_size": 63488 00:16:09.743 }, 00:16:09.743 { 00:16:09.743 "name": "BaseBdev2", 00:16:09.743 "uuid": "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e", 00:16:09.743 "is_configured": true, 00:16:09.743 "data_offset": 2048, 00:16:09.743 "data_size": 63488 00:16:09.743 } 00:16:09.743 ] 00:16:09.743 }' 00:16:09.743 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.743 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:10.309 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:10.566 [2024-07-12 08:42:45.750268] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.824 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 
00:16:10.824 "name": "Existed_Raid", 00:16:10.824 "aliases": [ 00:16:10.824 "c9a8fa0a-d38d-4baf-bb28-76976b65166f" 00:16:10.824 ], 00:16:10.824 "product_name": "Raid Volume", 00:16:10.824 "block_size": 512, 00:16:10.824 "num_blocks": 126976, 00:16:10.824 "uuid": "c9a8fa0a-d38d-4baf-bb28-76976b65166f", 00:16:10.824 "assigned_rate_limits": { 00:16:10.824 "rw_ios_per_sec": 0, 00:16:10.824 "rw_mbytes_per_sec": 0, 00:16:10.824 "r_mbytes_per_sec": 0, 00:16:10.824 "w_mbytes_per_sec": 0 00:16:10.824 }, 00:16:10.824 "claimed": false, 00:16:10.824 "zoned": false, 00:16:10.824 "supported_io_types": { 00:16:10.824 "read": true, 00:16:10.824 "write": true, 00:16:10.824 "unmap": true, 00:16:10.824 "flush": true, 00:16:10.824 "reset": true, 00:16:10.824 "nvme_admin": false, 00:16:10.824 "nvme_io": false, 00:16:10.824 "nvme_io_md": false, 00:16:10.824 "write_zeroes": true, 00:16:10.824 "zcopy": false, 00:16:10.824 "get_zone_info": false, 00:16:10.824 "zone_management": false, 00:16:10.824 "zone_append": false, 00:16:10.824 "compare": false, 00:16:10.824 "compare_and_write": false, 00:16:10.824 "abort": false, 00:16:10.824 "seek_hole": false, 00:16:10.824 "seek_data": false, 00:16:10.824 "copy": false, 00:16:10.824 "nvme_iov_md": false 00:16:10.824 }, 00:16:10.824 "memory_domains": [ 00:16:10.824 { 00:16:10.824 "dma_device_id": "system", 00:16:10.824 "dma_device_type": 1 00:16:10.824 }, 00:16:10.824 { 00:16:10.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.825 "dma_device_type": 2 00:16:10.825 }, 00:16:10.825 { 00:16:10.825 "dma_device_id": "system", 00:16:10.825 "dma_device_type": 1 00:16:10.825 }, 00:16:10.825 { 00:16:10.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.825 "dma_device_type": 2 00:16:10.825 } 00:16:10.825 ], 00:16:10.825 "driver_specific": { 00:16:10.825 "raid": { 00:16:10.825 "uuid": "c9a8fa0a-d38d-4baf-bb28-76976b65166f", 00:16:10.825 "strip_size_kb": 64, 00:16:10.825 "state": "online", 00:16:10.825 "raid_level": "raid0", 00:16:10.825 "superblock": true, 00:16:10.825 "num_base_bdevs": 2, 00:16:10.825 "num_base_bdevs_discovered": 2, 00:16:10.825 "num_base_bdevs_operational": 2, 00:16:10.825 "base_bdevs_list": [ 00:16:10.825 { 00:16:10.825 "name": "BaseBdev1", 00:16:10.825 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:10.825 "is_configured": true, 00:16:10.825 "data_offset": 2048, 00:16:10.825 "data_size": 63488 00:16:10.825 }, 00:16:10.825 { 00:16:10.825 "name": "BaseBdev2", 00:16:10.825 "uuid": "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e", 00:16:10.825 "is_configured": true, 00:16:10.825 "data_offset": 2048, 00:16:10.825 "data_size": 63488 00:16:10.825 } 00:16:10.825 ] 00:16:10.825 } 00:16:10.825 } 00:16:10.825 }' 00:16:10.825 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.825 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:10.825 BaseBdev2' 00:16:10.825 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.825 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:10.825 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.083 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.083 "name": "BaseBdev1", 
00:16:11.083 "aliases": [ 00:16:11.083 "97b96371-8688-4aad-8f1f-6c4fb746c7a9" 00:16:11.083 ], 00:16:11.083 "product_name": "Malloc disk", 00:16:11.083 "block_size": 512, 00:16:11.083 "num_blocks": 65536, 00:16:11.083 "uuid": "97b96371-8688-4aad-8f1f-6c4fb746c7a9", 00:16:11.083 "assigned_rate_limits": { 00:16:11.083 "rw_ios_per_sec": 0, 00:16:11.083 "rw_mbytes_per_sec": 0, 00:16:11.084 "r_mbytes_per_sec": 0, 00:16:11.084 "w_mbytes_per_sec": 0 00:16:11.084 }, 00:16:11.084 "claimed": true, 00:16:11.084 "claim_type": "exclusive_write", 00:16:11.084 "zoned": false, 00:16:11.084 "supported_io_types": { 00:16:11.084 "read": true, 00:16:11.084 "write": true, 00:16:11.084 "unmap": true, 00:16:11.084 "flush": true, 00:16:11.084 "reset": true, 00:16:11.084 "nvme_admin": false, 00:16:11.084 "nvme_io": false, 00:16:11.084 "nvme_io_md": false, 00:16:11.084 "write_zeroes": true, 00:16:11.084 "zcopy": true, 00:16:11.084 "get_zone_info": false, 00:16:11.084 "zone_management": false, 00:16:11.084 "zone_append": false, 00:16:11.084 "compare": false, 00:16:11.084 "compare_and_write": false, 00:16:11.084 "abort": true, 00:16:11.084 "seek_hole": false, 00:16:11.084 "seek_data": false, 00:16:11.084 "copy": true, 00:16:11.084 "nvme_iov_md": false 00:16:11.084 }, 00:16:11.084 "memory_domains": [ 00:16:11.084 { 00:16:11.084 "dma_device_id": "system", 00:16:11.084 "dma_device_type": 1 00:16:11.084 }, 00:16:11.084 { 00:16:11.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.084 "dma_device_type": 2 00:16:11.084 } 00:16:11.084 ], 00:16:11.084 "driver_specific": {} 00:16:11.084 }' 00:16:11.084 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.084 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.084 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.084 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.084 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.341 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.598 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.598 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.598 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:11.598 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.856 "name": "BaseBdev2", 00:16:11.856 "aliases": [ 00:16:11.856 "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e" 00:16:11.856 ], 00:16:11.856 "product_name": "Malloc disk", 00:16:11.856 
"block_size": 512, 00:16:11.856 "num_blocks": 65536, 00:16:11.856 "uuid": "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e", 00:16:11.856 "assigned_rate_limits": { 00:16:11.856 "rw_ios_per_sec": 0, 00:16:11.856 "rw_mbytes_per_sec": 0, 00:16:11.856 "r_mbytes_per_sec": 0, 00:16:11.856 "w_mbytes_per_sec": 0 00:16:11.856 }, 00:16:11.856 "claimed": true, 00:16:11.856 "claim_type": "exclusive_write", 00:16:11.856 "zoned": false, 00:16:11.856 "supported_io_types": { 00:16:11.856 "read": true, 00:16:11.856 "write": true, 00:16:11.856 "unmap": true, 00:16:11.856 "flush": true, 00:16:11.856 "reset": true, 00:16:11.856 "nvme_admin": false, 00:16:11.856 "nvme_io": false, 00:16:11.856 "nvme_io_md": false, 00:16:11.856 "write_zeroes": true, 00:16:11.856 "zcopy": true, 00:16:11.856 "get_zone_info": false, 00:16:11.856 "zone_management": false, 00:16:11.856 "zone_append": false, 00:16:11.856 "compare": false, 00:16:11.856 "compare_and_write": false, 00:16:11.856 "abort": true, 00:16:11.856 "seek_hole": false, 00:16:11.856 "seek_data": false, 00:16:11.856 "copy": true, 00:16:11.856 "nvme_iov_md": false 00:16:11.856 }, 00:16:11.856 "memory_domains": [ 00:16:11.856 { 00:16:11.856 "dma_device_id": "system", 00:16:11.856 "dma_device_type": 1 00:16:11.856 }, 00:16:11.856 { 00:16:11.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.856 "dma_device_type": 2 00:16:11.856 } 00:16:11.856 ], 00:16:11.856 "driver_specific": {} 00:16:11.856 }' 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.856 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.856 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.856 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.114 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:12.371 [2024-07-12 08:42:47.482454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.371 [2024-07-12 08:42:47.482508] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.371 [2024-07-12 08:42:47.482581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.629 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.887 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.887 "name": "Existed_Raid", 00:16:12.887 "uuid": "c9a8fa0a-d38d-4baf-bb28-76976b65166f", 00:16:12.887 "strip_size_kb": 64, 00:16:12.887 "state": "offline", 00:16:12.887 "raid_level": "raid0", 00:16:12.887 "superblock": true, 00:16:12.887 "num_base_bdevs": 2, 00:16:12.887 "num_base_bdevs_discovered": 1, 00:16:12.887 "num_base_bdevs_operational": 1, 00:16:12.887 "base_bdevs_list": [ 00:16:12.887 { 00:16:12.887 "name": null, 00:16:12.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.887 "is_configured": false, 00:16:12.887 "data_offset": 2048, 00:16:12.887 "data_size": 63488 00:16:12.887 }, 00:16:12.887 { 00:16:12.887 "name": "BaseBdev2", 00:16:12.887 "uuid": "3c8f4d83-be9e-4ab9-af6c-c76a874e2f9e", 00:16:12.887 "is_configured": true, 00:16:12.887 "data_offset": 2048, 00:16:12.887 "data_size": 63488 00:16:12.887 } 00:16:12.887 ] 00:16:12.887 }' 00:16:12.887 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.887 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.453 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:13.453 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:13.453 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.453 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:13.711 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 
00:16:13.711 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.711 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:13.969 [2024-07-12 08:42:49.092858] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.969 [2024-07-12 08:42:49.092954] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:14.228 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:14.228 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:14.228 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.228 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 121205 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 121205 ']' 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 121205 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.485 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121205 00:16:14.486 killing process with pid 121205 00:16:14.486 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.486 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.486 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121205' 00:16:14.486 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 121205 00:16:14.486 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 121205 00:16:14.486 [2024-07-12 08:42:49.493336] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.486 [2024-07-12 08:42:49.493457] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.860 ************************************ 00:16:15.860 END TEST raid_state_function_test_sb 00:16:15.860 ************************************ 00:16:15.860 08:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:15.860 00:16:15.860 real 0m13.106s 00:16:15.860 user 0m23.292s 00:16:15.860 sys 0m1.439s 00:16:15.860 08:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.860 08:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.860 08:42:50 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:15.860 08:42:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:15.860 08:42:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:15.860 08:42:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.860 08:42:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.860 ************************************ 00:16:15.860 START TEST raid_superblock_test 00:16:15.860 ************************************ 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=121631 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 121631 /var/tmp/spdk-raid.sock 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 121631 ']' 00:16:15.860 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.861 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.861 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:15.861 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.861 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.861 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:15.861 [2024-07-12 08:42:50.841473] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:16:15.861 [2024-07-12 08:42:50.841863] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121631 ] 00:16:15.861 [2024-07-12 08:42:51.010069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.119 [2024-07-12 08:42:51.248269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.376 [2024-07-12 08:42:51.450426] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:17.200 malloc1 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.200 [2024-07-12 08:42:52.362962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.200 [2024-07-12 08:42:52.363095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.200 [2024-07-12 08:42:52.363139] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:17.200 [2024-07-12 08:42:52.363161] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.200 [2024-07-12 08:42:52.365779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.200 [2024-07-12 08:42:52.365835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.200 pt1 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:17.200 08:42:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.200 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:17.769 malloc2 00:16:17.769 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.769 [2024-07-12 08:42:52.886353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.769 [2024-07-12 08:42:52.886507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.769 [2024-07-12 08:42:52.886554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:17.769 [2024-07-12 08:42:52.886578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.769 [2024-07-12 08:42:52.889220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.769 [2024-07-12 08:42:52.889275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.769 pt2 00:16:17.769 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:17.769 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:17.769 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:18.025 [2024-07-12 08:42:53.186512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.025 [2024-07-12 08:42:53.188699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.025 [2024-07-12 08:42:53.188962] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:18.025 [2024-07-12 08:42:53.188992] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:18.025 [2024-07-12 08:42:53.189169] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:18.025 [2024-07-12 08:42:53.189563] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:18.025 [2024-07-12 08:42:53.189589] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:18.025 [2024-07-12 08:42:53.189767] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:18.025 
08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.025 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.281 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.281 "name": "raid_bdev1", 00:16:18.281 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:18.281 "strip_size_kb": 64, 00:16:18.281 "state": "online", 00:16:18.281 "raid_level": "raid0", 00:16:18.281 "superblock": true, 00:16:18.281 "num_base_bdevs": 2, 00:16:18.281 "num_base_bdevs_discovered": 2, 00:16:18.281 "num_base_bdevs_operational": 2, 00:16:18.281 "base_bdevs_list": [ 00:16:18.281 { 00:16:18.281 "name": "pt1", 00:16:18.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 }, 00:16:18.281 { 00:16:18.281 "name": "pt2", 00:16:18.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.281 "is_configured": true, 00:16:18.281 "data_offset": 2048, 00:16:18.281 "data_size": 63488 00:16:18.281 } 00:16:18.281 ] 00:16:18.281 }' 00:16:18.281 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.281 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.214 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:19.472 [2024-07-12 08:42:54.455376] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:16:19.472 "name": "raid_bdev1", 00:16:19.472 "aliases": [ 00:16:19.472 "a12359a7-b523-4973-8b4f-a10578dd28a6" 00:16:19.472 ], 00:16:19.472 "product_name": "Raid Volume", 00:16:19.472 "block_size": 512, 00:16:19.472 "num_blocks": 126976, 00:16:19.472 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:19.472 "assigned_rate_limits": { 00:16:19.472 "rw_ios_per_sec": 0, 00:16:19.472 "rw_mbytes_per_sec": 0, 00:16:19.472 "r_mbytes_per_sec": 0, 00:16:19.472 "w_mbytes_per_sec": 0 00:16:19.472 }, 00:16:19.472 "claimed": false, 00:16:19.472 "zoned": false, 00:16:19.472 "supported_io_types": { 00:16:19.472 "read": true, 00:16:19.472 "write": true, 00:16:19.472 "unmap": true, 00:16:19.472 "flush": true, 00:16:19.472 "reset": true, 00:16:19.472 "nvme_admin": false, 00:16:19.472 "nvme_io": false, 00:16:19.472 "nvme_io_md": false, 00:16:19.472 "write_zeroes": true, 00:16:19.472 "zcopy": false, 00:16:19.472 "get_zone_info": false, 00:16:19.472 "zone_management": false, 00:16:19.472 "zone_append": false, 00:16:19.472 "compare": false, 00:16:19.472 "compare_and_write": false, 00:16:19.472 "abort": false, 00:16:19.472 "seek_hole": false, 00:16:19.472 "seek_data": false, 00:16:19.472 "copy": false, 00:16:19.472 "nvme_iov_md": false 00:16:19.472 }, 00:16:19.472 "memory_domains": [ 00:16:19.472 { 00:16:19.472 "dma_device_id": "system", 00:16:19.472 "dma_device_type": 1 00:16:19.472 }, 00:16:19.472 { 00:16:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.472 "dma_device_type": 2 00:16:19.472 }, 00:16:19.472 { 00:16:19.472 "dma_device_id": "system", 00:16:19.472 "dma_device_type": 1 00:16:19.472 }, 00:16:19.472 { 00:16:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.472 "dma_device_type": 2 00:16:19.472 } 00:16:19.472 ], 00:16:19.472 "driver_specific": { 00:16:19.472 "raid": { 00:16:19.472 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:19.472 "strip_size_kb": 64, 00:16:19.472 "state": "online", 00:16:19.472 "raid_level": "raid0", 00:16:19.472 "superblock": true, 00:16:19.472 "num_base_bdevs": 2, 00:16:19.472 "num_base_bdevs_discovered": 2, 00:16:19.472 "num_base_bdevs_operational": 2, 00:16:19.472 "base_bdevs_list": [ 00:16:19.472 { 00:16:19.472 "name": "pt1", 00:16:19.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.472 "is_configured": true, 00:16:19.472 "data_offset": 2048, 00:16:19.472 "data_size": 63488 00:16:19.472 }, 00:16:19.472 { 00:16:19.472 "name": "pt2", 00:16:19.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.472 "is_configured": true, 00:16:19.472 "data_offset": 2048, 00:16:19.472 "data_size": 63488 00:16:19.472 } 00:16:19.472 ] 00:16:19.472 } 00:16:19.472 } 00:16:19.472 }' 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:19.472 pt2' 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:19.472 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:19.730 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:19.730 "name": "pt1", 00:16:19.730 "aliases": [ 00:16:19.730 
"00000000-0000-0000-0000-000000000001" 00:16:19.730 ], 00:16:19.730 "product_name": "passthru", 00:16:19.730 "block_size": 512, 00:16:19.730 "num_blocks": 65536, 00:16:19.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.730 "assigned_rate_limits": { 00:16:19.730 "rw_ios_per_sec": 0, 00:16:19.730 "rw_mbytes_per_sec": 0, 00:16:19.730 "r_mbytes_per_sec": 0, 00:16:19.730 "w_mbytes_per_sec": 0 00:16:19.730 }, 00:16:19.730 "claimed": true, 00:16:19.730 "claim_type": "exclusive_write", 00:16:19.730 "zoned": false, 00:16:19.730 "supported_io_types": { 00:16:19.730 "read": true, 00:16:19.730 "write": true, 00:16:19.730 "unmap": true, 00:16:19.730 "flush": true, 00:16:19.730 "reset": true, 00:16:19.730 "nvme_admin": false, 00:16:19.730 "nvme_io": false, 00:16:19.730 "nvme_io_md": false, 00:16:19.730 "write_zeroes": true, 00:16:19.730 "zcopy": true, 00:16:19.730 "get_zone_info": false, 00:16:19.730 "zone_management": false, 00:16:19.730 "zone_append": false, 00:16:19.730 "compare": false, 00:16:19.730 "compare_and_write": false, 00:16:19.730 "abort": true, 00:16:19.730 "seek_hole": false, 00:16:19.730 "seek_data": false, 00:16:19.730 "copy": true, 00:16:19.730 "nvme_iov_md": false 00:16:19.730 }, 00:16:19.730 "memory_domains": [ 00:16:19.730 { 00:16:19.730 "dma_device_id": "system", 00:16:19.730 "dma_device_type": 1 00:16:19.730 }, 00:16:19.730 { 00:16:19.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.730 "dma_device_type": 2 00:16:19.731 } 00:16:19.731 ], 00:16:19.731 "driver_specific": { 00:16:19.731 "passthru": { 00:16:19.731 "name": "pt1", 00:16:19.731 "base_bdev_name": "malloc1" 00:16:19.731 } 00:16:19.731 } 00:16:19.731 }' 00:16:19.731 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.731 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.731 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.731 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.731 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.989 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.989 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.989 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.989 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.989 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.989 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:20.248 "name": "pt2", 00:16:20.248 "aliases": [ 00:16:20.248 "00000000-0000-0000-0000-000000000002" 00:16:20.248 ], 00:16:20.248 "product_name": "passthru", 00:16:20.248 "block_size": 512, 00:16:20.248 "num_blocks": 
65536, 00:16:20.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.248 "assigned_rate_limits": { 00:16:20.248 "rw_ios_per_sec": 0, 00:16:20.248 "rw_mbytes_per_sec": 0, 00:16:20.248 "r_mbytes_per_sec": 0, 00:16:20.248 "w_mbytes_per_sec": 0 00:16:20.248 }, 00:16:20.248 "claimed": true, 00:16:20.248 "claim_type": "exclusive_write", 00:16:20.248 "zoned": false, 00:16:20.248 "supported_io_types": { 00:16:20.248 "read": true, 00:16:20.248 "write": true, 00:16:20.248 "unmap": true, 00:16:20.248 "flush": true, 00:16:20.248 "reset": true, 00:16:20.248 "nvme_admin": false, 00:16:20.248 "nvme_io": false, 00:16:20.248 "nvme_io_md": false, 00:16:20.248 "write_zeroes": true, 00:16:20.248 "zcopy": true, 00:16:20.248 "get_zone_info": false, 00:16:20.248 "zone_management": false, 00:16:20.248 "zone_append": false, 00:16:20.248 "compare": false, 00:16:20.248 "compare_and_write": false, 00:16:20.248 "abort": true, 00:16:20.248 "seek_hole": false, 00:16:20.248 "seek_data": false, 00:16:20.248 "copy": true, 00:16:20.248 "nvme_iov_md": false 00:16:20.248 }, 00:16:20.248 "memory_domains": [ 00:16:20.248 { 00:16:20.248 "dma_device_id": "system", 00:16:20.248 "dma_device_type": 1 00:16:20.248 }, 00:16:20.248 { 00:16:20.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.248 "dma_device_type": 2 00:16:20.248 } 00:16:20.248 ], 00:16:20.248 "driver_specific": { 00:16:20.248 "passthru": { 00:16:20.248 "name": "pt2", 00:16:20.248 "base_bdev_name": "malloc2" 00:16:20.248 } 00:16:20.248 } 00:16:20.248 }' 00:16:20.248 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:20.506 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.764 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:21.022 [2024-07-12 08:42:56.143807] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.022 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a12359a7-b523-4973-8b4f-a10578dd28a6 00:16:21.023 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a12359a7-b523-4973-8b4f-a10578dd28a6 ']' 00:16:21.023 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:16:21.281 [2024-07-12 08:42:56.431569] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.281 [2024-07-12 08:42:56.431619] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.281 [2024-07-12 08:42:56.431729] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.281 [2024-07-12 08:42:56.431794] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.281 [2024-07-12 08:42:56.431807] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:16:21.281 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.281 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:21.539 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:21.540 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:21.540 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.540 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:21.815 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.815 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:22.127 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:22.127 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.385 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.642 [2024-07-12 08:42:57.771847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:22.642 [2024-07-12 08:42:57.774041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:22.642 [2024-07-12 08:42:57.774132] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:22.642 [2024-07-12 08:42:57.774239] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:22.642 [2024-07-12 08:42:57.774278] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.642 [2024-07-12 08:42:57.774289] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:16:22.642 request: 00:16:22.642 { 00:16:22.642 "name": "raid_bdev1", 00:16:22.642 "raid_level": "raid0", 00:16:22.642 "base_bdevs": [ 00:16:22.642 "malloc1", 00:16:22.642 "malloc2" 00:16:22.642 ], 00:16:22.642 "strip_size_kb": 64, 00:16:22.642 "superblock": false, 00:16:22.642 "method": "bdev_raid_create", 00:16:22.642 "req_id": 1 00:16:22.642 } 00:16:22.642 Got JSON-RPC error response 00:16:22.642 response: 00:16:22.642 { 00:16:22.642 "code": -17, 00:16:22.642 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:22.642 } 00:16:22.642 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:22.642 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:22.642 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:22.642 08:42:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:22.642 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.643 08:42:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:22.901 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:22.901 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:22.901 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.159 [2024-07-12 08:42:58.283930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.159 [2024-07-12 08:42:58.284066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.159 [2024-07-12 08:42:58.284106] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:23.159 [2024-07-12 08:42:58.284137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.159 [2024-07-12 08:42:58.286769] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:16:23.159 [2024-07-12 08:42:58.286856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.159 [2024-07-12 08:42:58.286985] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:23.159 [2024-07-12 08:42:58.287073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.159 pt1 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.159 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.417 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.417 "name": "raid_bdev1", 00:16:23.417 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:23.417 "strip_size_kb": 64, 00:16:23.417 "state": "configuring", 00:16:23.417 "raid_level": "raid0", 00:16:23.417 "superblock": true, 00:16:23.417 "num_base_bdevs": 2, 00:16:23.417 "num_base_bdevs_discovered": 1, 00:16:23.417 "num_base_bdevs_operational": 2, 00:16:23.417 "base_bdevs_list": [ 00:16:23.417 { 00:16:23.417 "name": "pt1", 00:16:23.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.417 "is_configured": true, 00:16:23.417 "data_offset": 2048, 00:16:23.417 "data_size": 63488 00:16:23.417 }, 00:16:23.417 { 00:16:23.417 "name": null, 00:16:23.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.417 "is_configured": false, 00:16:23.417 "data_offset": 2048, 00:16:23.417 "data_size": 63488 00:16:23.417 } 00:16:23.417 ] 00:16:23.417 }' 00:16:23.418 08:42:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.418 08:42:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.351 [2024-07-12 
08:42:59.460167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.351 [2024-07-12 08:42:59.460528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.351 [2024-07-12 08:42:59.460683] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:24.351 [2024-07-12 08:42:59.460828] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.351 [2024-07-12 08:42:59.461418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.351 [2024-07-12 08:42:59.461597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.351 [2024-07-12 08:42:59.461821] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:24.351 [2024-07-12 08:42:59.461964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.351 [2024-07-12 08:42:59.462215] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:16:24.351 [2024-07-12 08:42:59.462335] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:24.351 [2024-07-12 08:42:59.462490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:24.351 [2024-07-12 08:42:59.462908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:16:24.351 [2024-07-12 08:42:59.463050] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:16:24.351 [2024-07-12 08:42:59.463303] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.351 pt2 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.351 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.352 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.609 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.609 "name": "raid_bdev1", 00:16:24.609 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 
00:16:24.609 "strip_size_kb": 64, 00:16:24.609 "state": "online", 00:16:24.609 "raid_level": "raid0", 00:16:24.609 "superblock": true, 00:16:24.609 "num_base_bdevs": 2, 00:16:24.609 "num_base_bdevs_discovered": 2, 00:16:24.609 "num_base_bdevs_operational": 2, 00:16:24.609 "base_bdevs_list": [ 00:16:24.609 { 00:16:24.609 "name": "pt1", 00:16:24.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.609 "is_configured": true, 00:16:24.609 "data_offset": 2048, 00:16:24.609 "data_size": 63488 00:16:24.609 }, 00:16:24.609 { 00:16:24.609 "name": "pt2", 00:16:24.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.609 "is_configured": true, 00:16:24.609 "data_offset": 2048, 00:16:24.609 "data_size": 63488 00:16:24.609 } 00:16:24.609 ] 00:16:24.609 }' 00:16:24.610 08:42:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.610 08:42:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:25.543 [2024-07-12 08:43:00.700784] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.543 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:25.543 "name": "raid_bdev1", 00:16:25.543 "aliases": [ 00:16:25.543 "a12359a7-b523-4973-8b4f-a10578dd28a6" 00:16:25.543 ], 00:16:25.543 "product_name": "Raid Volume", 00:16:25.543 "block_size": 512, 00:16:25.543 "num_blocks": 126976, 00:16:25.543 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:25.543 "assigned_rate_limits": { 00:16:25.543 "rw_ios_per_sec": 0, 00:16:25.543 "rw_mbytes_per_sec": 0, 00:16:25.543 "r_mbytes_per_sec": 0, 00:16:25.543 "w_mbytes_per_sec": 0 00:16:25.543 }, 00:16:25.543 "claimed": false, 00:16:25.543 "zoned": false, 00:16:25.543 "supported_io_types": { 00:16:25.543 "read": true, 00:16:25.543 "write": true, 00:16:25.543 "unmap": true, 00:16:25.543 "flush": true, 00:16:25.543 "reset": true, 00:16:25.543 "nvme_admin": false, 00:16:25.543 "nvme_io": false, 00:16:25.543 "nvme_io_md": false, 00:16:25.543 "write_zeroes": true, 00:16:25.543 "zcopy": false, 00:16:25.543 "get_zone_info": false, 00:16:25.543 "zone_management": false, 00:16:25.543 "zone_append": false, 00:16:25.543 "compare": false, 00:16:25.543 "compare_and_write": false, 00:16:25.543 "abort": false, 00:16:25.543 "seek_hole": false, 00:16:25.543 "seek_data": false, 00:16:25.543 "copy": false, 00:16:25.543 "nvme_iov_md": false 00:16:25.543 }, 00:16:25.543 "memory_domains": [ 00:16:25.543 { 00:16:25.543 "dma_device_id": "system", 00:16:25.544 "dma_device_type": 1 00:16:25.544 }, 00:16:25.544 { 00:16:25.544 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:25.544 "dma_device_type": 2 00:16:25.544 }, 00:16:25.544 { 00:16:25.544 "dma_device_id": "system", 00:16:25.544 "dma_device_type": 1 00:16:25.544 }, 00:16:25.544 { 00:16:25.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.544 "dma_device_type": 2 00:16:25.544 } 00:16:25.544 ], 00:16:25.544 "driver_specific": { 00:16:25.544 "raid": { 00:16:25.544 "uuid": "a12359a7-b523-4973-8b4f-a10578dd28a6", 00:16:25.544 "strip_size_kb": 64, 00:16:25.544 "state": "online", 00:16:25.544 "raid_level": "raid0", 00:16:25.544 "superblock": true, 00:16:25.544 "num_base_bdevs": 2, 00:16:25.544 "num_base_bdevs_discovered": 2, 00:16:25.544 "num_base_bdevs_operational": 2, 00:16:25.544 "base_bdevs_list": [ 00:16:25.544 { 00:16:25.544 "name": "pt1", 00:16:25.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.544 "is_configured": true, 00:16:25.544 "data_offset": 2048, 00:16:25.544 "data_size": 63488 00:16:25.544 }, 00:16:25.544 { 00:16:25.544 "name": "pt2", 00:16:25.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.544 "is_configured": true, 00:16:25.544 "data_offset": 2048, 00:16:25.544 "data_size": 63488 00:16:25.544 } 00:16:25.544 ] 00:16:25.544 } 00:16:25.544 } 00:16:25.544 }' 00:16:25.544 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.802 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:25.802 pt2' 00:16:25.802 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.802 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:25.802 08:43:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.063 "name": "pt1", 00:16:26.063 "aliases": [ 00:16:26.063 "00000000-0000-0000-0000-000000000001" 00:16:26.063 ], 00:16:26.063 "product_name": "passthru", 00:16:26.063 "block_size": 512, 00:16:26.063 "num_blocks": 65536, 00:16:26.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.063 "assigned_rate_limits": { 00:16:26.063 "rw_ios_per_sec": 0, 00:16:26.063 "rw_mbytes_per_sec": 0, 00:16:26.063 "r_mbytes_per_sec": 0, 00:16:26.063 "w_mbytes_per_sec": 0 00:16:26.063 }, 00:16:26.063 "claimed": true, 00:16:26.063 "claim_type": "exclusive_write", 00:16:26.063 "zoned": false, 00:16:26.063 "supported_io_types": { 00:16:26.063 "read": true, 00:16:26.063 "write": true, 00:16:26.063 "unmap": true, 00:16:26.063 "flush": true, 00:16:26.063 "reset": true, 00:16:26.063 "nvme_admin": false, 00:16:26.063 "nvme_io": false, 00:16:26.063 "nvme_io_md": false, 00:16:26.063 "write_zeroes": true, 00:16:26.063 "zcopy": true, 00:16:26.063 "get_zone_info": false, 00:16:26.063 "zone_management": false, 00:16:26.063 "zone_append": false, 00:16:26.063 "compare": false, 00:16:26.063 "compare_and_write": false, 00:16:26.063 "abort": true, 00:16:26.063 "seek_hole": false, 00:16:26.063 "seek_data": false, 00:16:26.063 "copy": true, 00:16:26.063 "nvme_iov_md": false 00:16:26.063 }, 00:16:26.063 "memory_domains": [ 00:16:26.063 { 00:16:26.063 "dma_device_id": "system", 00:16:26.063 "dma_device_type": 1 00:16:26.063 }, 00:16:26.063 { 00:16:26.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.063 "dma_device_type": 2 00:16:26.063 } 
00:16:26.063 ], 00:16:26.063 "driver_specific": { 00:16:26.063 "passthru": { 00:16:26.063 "name": "pt1", 00:16:26.063 "base_bdev_name": "malloc1" 00:16:26.063 } 00:16:26.063 } 00:16:26.063 }' 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.063 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.323 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.580 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.580 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.580 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:26.580 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.838 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.838 "name": "pt2", 00:16:26.838 "aliases": [ 00:16:26.838 "00000000-0000-0000-0000-000000000002" 00:16:26.838 ], 00:16:26.838 "product_name": "passthru", 00:16:26.838 "block_size": 512, 00:16:26.838 "num_blocks": 65536, 00:16:26.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.838 "assigned_rate_limits": { 00:16:26.838 "rw_ios_per_sec": 0, 00:16:26.838 "rw_mbytes_per_sec": 0, 00:16:26.838 "r_mbytes_per_sec": 0, 00:16:26.838 "w_mbytes_per_sec": 0 00:16:26.838 }, 00:16:26.838 "claimed": true, 00:16:26.838 "claim_type": "exclusive_write", 00:16:26.838 "zoned": false, 00:16:26.838 "supported_io_types": { 00:16:26.838 "read": true, 00:16:26.838 "write": true, 00:16:26.838 "unmap": true, 00:16:26.838 "flush": true, 00:16:26.838 "reset": true, 00:16:26.838 "nvme_admin": false, 00:16:26.838 "nvme_io": false, 00:16:26.838 "nvme_io_md": false, 00:16:26.838 "write_zeroes": true, 00:16:26.838 "zcopy": true, 00:16:26.838 "get_zone_info": false, 00:16:26.838 "zone_management": false, 00:16:26.838 "zone_append": false, 00:16:26.838 "compare": false, 00:16:26.838 "compare_and_write": false, 00:16:26.838 "abort": true, 00:16:26.838 "seek_hole": false, 00:16:26.838 "seek_data": false, 00:16:26.838 "copy": true, 00:16:26.838 "nvme_iov_md": false 00:16:26.838 }, 00:16:26.838 "memory_domains": [ 00:16:26.838 { 00:16:26.838 "dma_device_id": "system", 00:16:26.838 "dma_device_type": 1 00:16:26.838 }, 00:16:26.838 { 00:16:26.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.838 "dma_device_type": 2 00:16:26.838 } 00:16:26.838 ], 00:16:26.838 "driver_specific": { 00:16:26.838 "passthru": { 00:16:26.838 "name": "pt2", 00:16:26.838 "base_bdev_name": "malloc2" 
00:16:26.838 } 00:16:26.838 } 00:16:26.838 }' 00:16:26.838 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.838 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.838 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.838 08:43:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:27.095 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:27.352 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:27.352 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:27.352 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:27.352 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:27.609 [2024-07-12 08:43:02.621298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a12359a7-b523-4973-8b4f-a10578dd28a6 '!=' a12359a7-b523-4973-8b4f-a10578dd28a6 ']' 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 121631 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 121631 ']' 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 121631 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121631 00:16:27.609 killing process with pid 121631 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121631' 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 121631 00:16:27.609 08:43:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 121631 00:16:27.609 [2024-07-12 08:43:02.661161] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.609 [2024-07-12 08:43:02.661272] bdev_raid.c: 474:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:27.609 [2024-07-12 08:43:02.661332] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.609 [2024-07-12 08:43:02.661344] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:16:27.866 [2024-07-12 08:43:02.832990] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.798 ************************************ 00:16:28.798 END TEST raid_superblock_test 00:16:28.798 ************************************ 00:16:28.798 08:43:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:28.798 00:16:28.798 real 0m13.198s 00:16:28.798 user 0m23.773s 00:16:28.798 sys 0m1.386s 00:16:28.798 08:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.798 08:43:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 08:43:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:29.086 08:43:04 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:16:29.086 08:43:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:29.086 08:43:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.086 08:43:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 ************************************ 00:16:29.086 START TEST raid_read_error_test 00:16:29.086 ************************************ 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.omueOE0mp6 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122039 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122039 /var/tmp/spdk-raid.sock 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 122039 ']' 00:16:29.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.086 08:43:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 [2024-07-12 08:43:04.102068] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
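
The preamble above shows the fixed pattern every raid_io_error_test run in this log follows: bdevperf is launched with -z, so it sits idle on the RPC socket while the test assembles its bdev stack, and the queued workload is only released afterwards through the perform_tests RPC. A minimal hand-run sketch of the same flow (SPDK and SOCK are illustrative shorthands; the individual commands are the ones this run issues in the trace that follows):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    # 1) Start bdevperf suspended: -z makes it wait for the perform_tests RPC.
    "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # 2) Build the stack under test: malloc -> error -> passthru, per base bdev.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_error_create BaseBdev1_malloc
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...repeat for BaseBdev2, then assemble raid0 with a superblock (-s):
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # 3) Release the queued workload.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
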
00:16:29.086 [2024-07-12 08:43:04.102282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122039 ] 00:16:29.086 [2024-07-12 08:43:04.273144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.348 [2024-07-12 08:43:04.533103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.606 [2024-07-12 08:43:04.741790] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.172 08:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.172 08:43:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:30.172 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:30.172 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.172 BaseBdev1_malloc 00:16:30.172 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:30.429 true 00:16:30.687 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:30.944 [2024-07-12 08:43:05.895904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:30.944 [2024-07-12 08:43:05.896041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.944 [2024-07-12 08:43:05.896090] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.944 [2024-07-12 08:43:05.896113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.944 [2024-07-12 08:43:05.898805] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.944 [2024-07-12 08:43:05.898865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.944 BaseBdev1 00:16:30.944 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:30.944 08:43:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:31.202 BaseBdev2_malloc 00:16:31.202 08:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:31.460 true 00:16:31.460 08:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:31.717 [2024-07-12 08:43:06.763253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:31.717 [2024-07-12 08:43:06.763389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.717 [2024-07-12 08:43:06.763440] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:31.717 [2024-07-12 08:43:06.763464] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.717 [2024-07-12 08:43:06.766103] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.717 [2024-07-12 08:43:06.766162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:31.717 BaseBdev2 00:16:31.717 08:43:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:31.974 [2024-07-12 08:43:07.055374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.974 [2024-07-12 08:43:07.057651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.974 [2024-07-12 08:43:07.057984] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:31.974 [2024-07-12 08:43:07.058005] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.974 [2024-07-12 08:43:07.058162] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:31.974 [2024-07-12 08:43:07.058612] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:31.974 [2024-07-12 08:43:07.058639] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:31.974 [2024-07-12 08:43:07.058823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.974 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.975 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.975 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.975 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.975 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.975 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.232 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.232 "name": "raid_bdev1", 00:16:32.232 "uuid": "8b2f0523-40be-4dbc-9542-e7f82fb5a83a", 00:16:32.232 "strip_size_kb": 64, 00:16:32.232 "state": "online", 00:16:32.232 "raid_level": "raid0", 00:16:32.232 "superblock": true, 00:16:32.232 "num_base_bdevs": 2, 00:16:32.232 "num_base_bdevs_discovered": 2, 00:16:32.232 "num_base_bdevs_operational": 2, 00:16:32.232 "base_bdevs_list": [ 00:16:32.232 { 00:16:32.232 "name": "BaseBdev1", 
00:16:32.232 "uuid": "e1e9e231-1cf9-5bf2-b49c-a08b1ee9149a", 00:16:32.232 "is_configured": true, 00:16:32.232 "data_offset": 2048, 00:16:32.232 "data_size": 63488 00:16:32.232 }, 00:16:32.232 { 00:16:32.232 "name": "BaseBdev2", 00:16:32.232 "uuid": "949fb6fd-5382-58e8-b4fb-aba16a010c6f", 00:16:32.232 "is_configured": true, 00:16:32.232 "data_offset": 2048, 00:16:32.232 "data_size": 63488 00:16:32.232 } 00:16:32.232 ] 00:16:32.232 }' 00:16:32.232 08:43:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.232 08:43:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.189 08:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:33.189 08:43:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:33.189 [2024-07-12 08:43:08.244982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:34.121 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.380 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.638 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.638 "name": "raid_bdev1", 00:16:34.638 "uuid": "8b2f0523-40be-4dbc-9542-e7f82fb5a83a", 00:16:34.638 "strip_size_kb": 64, 00:16:34.638 "state": "online", 00:16:34.638 "raid_level": "raid0", 00:16:34.638 "superblock": true, 00:16:34.638 "num_base_bdevs": 2, 00:16:34.638 "num_base_bdevs_discovered": 2, 00:16:34.638 "num_base_bdevs_operational": 2, 00:16:34.638 "base_bdevs_list": [ 00:16:34.638 { 00:16:34.638 "name": "BaseBdev1", 00:16:34.638 "uuid": 
"e1e9e231-1cf9-5bf2-b49c-a08b1ee9149a", 00:16:34.638 "is_configured": true, 00:16:34.638 "data_offset": 2048, 00:16:34.638 "data_size": 63488 00:16:34.638 }, 00:16:34.638 { 00:16:34.638 "name": "BaseBdev2", 00:16:34.638 "uuid": "949fb6fd-5382-58e8-b4fb-aba16a010c6f", 00:16:34.638 "is_configured": true, 00:16:34.638 "data_offset": 2048, 00:16:34.638 "data_size": 63488 00:16:34.638 } 00:16:34.638 ] 00:16:34.638 }' 00:16:34.638 08:43:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.638 08:43:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:35.572 [2024-07-12 08:43:10.730396] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.572 [2024-07-12 08:43:10.730465] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.572 [2024-07-12 08:43:10.733520] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.572 [2024-07-12 08:43:10.733601] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.572 [2024-07-12 08:43:10.733639] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.572 [2024-07-12 08:43:10.733652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:35.572 0 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122039 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 122039 ']' 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 122039 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.572 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122039 00:16:35.830 killing process with pid 122039 00:16:35.830 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:35.830 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:35.830 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122039' 00:16:35.830 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 122039 00:16:35.830 08:43:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 122039 00:16:35.830 [2024-07-12 08:43:10.765836] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.830 [2024-07-12 08:43:10.873768] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.omueOE0mp6 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.40 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.40 != \0\.\0\0 ]] 00:16:37.203 00:16:37.203 real 0m8.043s 00:16:37.203 user 0m12.512s 00:16:37.203 sys 0m0.828s 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.203 08:43:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.203 ************************************ 00:16:37.203 END TEST raid_read_error_test 00:16:37.203 ************************************ 00:16:37.203 08:43:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:37.203 08:43:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:16:37.203 08:43:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:37.203 08:43:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.203 08:43:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.203 ************************************ 00:16:37.203 START TEST raid_write_error_test 00:16:37.203 ************************************ 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:37.203 
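
The pass criterion applied above for the read-error case deserves a gloss: has_redundancy returns 1 for raid0, meaning failures injected into a base bdev are expected to surface at raid_bdev1 rather than be repaired, so the test demands a non-zero failure rate in the bdevperf summary. A condensed sketch of that check, using the log path minted by this run (the harness reads the same field with awk '{print $6}'):

    LOG=/raidtest/tmp.omueOE0mp6
    # Pull raid_bdev1's failures-per-second column out of the bdevperf summary
    # and assert it is non-zero, mirroring bdev_raid.sh@843-847:
    fail_per_s=$(grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]   # here fail_per_s is 0.40, so the test passes
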
08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pvM2iJxBNK 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=122262 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 122262 /var/tmp/spdk-raid.sock 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 122262 ']' 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.203 08:43:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:37.203 [2024-07-12 08:43:12.193240] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:16:37.203 [2024-07-12 08:43:12.193466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122262 ] 00:16:37.203 [2024-07-12 08:43:12.354941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.498 [2024-07-12 08:43:12.637640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.774 [2024-07-12 08:43:12.835129] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.030 08:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.030 08:43:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:38.030 08:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:38.030 08:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:38.594 BaseBdev1_malloc 00:16:38.594 08:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:38.852 true 00:16:38.852 08:43:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:39.109 [2024-07-12 08:43:14.172443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:39.109 [2024-07-12 08:43:14.172586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.109 [2024-07-12 08:43:14.172631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:39.109 [2024-07-12 08:43:14.172653] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.109 [2024-07-12 08:43:14.175300] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.109 [2024-07-12 08:43:14.175370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.109 BaseBdev1 00:16:39.109 08:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:39.109 08:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:39.365 BaseBdev2_malloc 00:16:39.365 08:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:39.623 true 00:16:39.623 08:43:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:39.880 [2024-07-12 08:43:15.030550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:39.880 [2024-07-12 08:43:15.030732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.880 [2024-07-12 08:43:15.030782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:39.880 [2024-07-12 
08:43:15.030822] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.880 [2024-07-12 08:43:15.033222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.880 [2024-07-12 08:43:15.033290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:39.880 BaseBdev2 00:16:39.880 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:40.138 [2024-07-12 08:43:15.254702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.138 [2024-07-12 08:43:15.256793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.138 [2024-07-12 08:43:15.257102] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:40.138 [2024-07-12 08:43:15.257128] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:40.138 [2024-07-12 08:43:15.257264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:40.138 [2024-07-12 08:43:15.257684] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:40.138 [2024-07-12 08:43:15.257708] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:40.138 [2024-07-12 08:43:15.257904] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.138 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.395 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.395 "name": "raid_bdev1", 00:16:40.395 "uuid": "dd7ad60e-1752-4a4c-b2fd-d77f389d1a9b", 00:16:40.395 "strip_size_kb": 64, 00:16:40.395 "state": "online", 00:16:40.395 "raid_level": "raid0", 00:16:40.395 "superblock": true, 00:16:40.395 "num_base_bdevs": 2, 00:16:40.395 "num_base_bdevs_discovered": 2, 00:16:40.395 "num_base_bdevs_operational": 2, 00:16:40.395 "base_bdevs_list": [ 00:16:40.395 { 00:16:40.395 
"name": "BaseBdev1", 00:16:40.395 "uuid": "1d91d3bf-8b66-570e-85da-b8114ea51d84", 00:16:40.395 "is_configured": true, 00:16:40.395 "data_offset": 2048, 00:16:40.395 "data_size": 63488 00:16:40.395 }, 00:16:40.395 { 00:16:40.395 "name": "BaseBdev2", 00:16:40.395 "uuid": "650fbc82-5c21-5bda-9fb5-38afdc55edb9", 00:16:40.395 "is_configured": true, 00:16:40.395 "data_offset": 2048, 00:16:40.395 "data_size": 63488 00:16:40.395 } 00:16:40.395 ] 00:16:40.395 }' 00:16:40.395 08:43:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.395 08:43:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.330 08:43:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:41.330 08:43:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:41.330 [2024-07-12 08:43:16.324173] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:42.266 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.525 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.783 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.783 "name": "raid_bdev1", 00:16:42.783 "uuid": "dd7ad60e-1752-4a4c-b2fd-d77f389d1a9b", 00:16:42.783 "strip_size_kb": 64, 00:16:42.783 "state": "online", 00:16:42.783 "raid_level": "raid0", 00:16:42.783 "superblock": true, 00:16:42.783 "num_base_bdevs": 2, 00:16:42.783 "num_base_bdevs_discovered": 2, 00:16:42.783 "num_base_bdevs_operational": 2, 00:16:42.783 "base_bdevs_list": [ 00:16:42.783 { 00:16:42.783 
"name": "BaseBdev1", 00:16:42.783 "uuid": "1d91d3bf-8b66-570e-85da-b8114ea51d84", 00:16:42.783 "is_configured": true, 00:16:42.783 "data_offset": 2048, 00:16:42.783 "data_size": 63488 00:16:42.783 }, 00:16:42.783 { 00:16:42.783 "name": "BaseBdev2", 00:16:42.783 "uuid": "650fbc82-5c21-5bda-9fb5-38afdc55edb9", 00:16:42.783 "is_configured": true, 00:16:42.783 "data_offset": 2048, 00:16:42.783 "data_size": 63488 00:16:42.783 } 00:16:42.783 ] 00:16:42.783 }' 00:16:42.783 08:43:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.783 08:43:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.350 08:43:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:43.608 [2024-07-12 08:43:18.780125] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.608 [2024-07-12 08:43:18.780179] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.608 [2024-07-12 08:43:18.783255] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.608 [2024-07-12 08:43:18.783315] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.608 [2024-07-12 08:43:18.783355] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.608 [2024-07-12 08:43:18.783366] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:43.608 0 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 122262 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 122262 ']' 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 122262 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122262 00:16:43.867 killing process with pid 122262 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122262' 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 122262 00:16:43.867 08:43:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 122262 00:16:43.867 [2024-07-12 08:43:18.821010] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.867 [2024-07-12 08:43:18.934088] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pvM2iJxBNK 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:45.241 ************************************ 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:45.241 08:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:16:45.241 00:16:45.241 real 0m8.021s 00:16:45.242 user 0m12.403s 00:16:45.242 sys 0m0.882s 00:16:45.242 08:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.242 08:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 END TEST raid_write_error_test 00:16:45.242 ************************************ 00:16:45.242 08:43:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:45.242 08:43:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:45.242 08:43:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:16:45.242 08:43:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:45.242 08:43:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.242 08:43:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 ************************************ 00:16:45.242 START TEST raid_state_function_test 00:16:45.242 ************************************ 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=122455 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 122455' 00:16:45.242 Process raid pid: 122455 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 122455 /var/tmp/spdk-raid.sock 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 122455 ']' 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.242 08:43:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 [2024-07-12 08:43:20.268722] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
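
The first check this state-function test performs is that an array declared over base bdevs that do not exist yet stays in the "configuring" state, which is what the trace below goes on to show. A minimal reproduction of that check (the trailing .state step is an illustrative narrowing of the select() filter the harness uses):

    # Declare the array before BaseBdev1/BaseBdev2 exist; the RPC registers it
    # and leaves it waiting for its base bdevs:
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'   # prints: configuring
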
00:16:45.242 [2024-07-12 08:43:20.269572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.500 [2024-07-12 08:43:20.440930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.758 [2024-07-12 08:43:20.718769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.758 [2024-07-12 08:43:20.922025] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.320 08:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.320 08:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:46.320 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:46.577 [2024-07-12 08:43:21.541135] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.577 [2024-07-12 08:43:21.541538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.577 [2024-07-12 08:43:21.541702] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.577 [2024-07-12 08:43:21.541801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.577 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.835 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.835 "name": "Existed_Raid", 00:16:46.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.835 "strip_size_kb": 64, 00:16:46.835 "state": "configuring", 00:16:46.835 "raid_level": "concat", 00:16:46.835 "superblock": false, 00:16:46.835 "num_base_bdevs": 2, 00:16:46.835 "num_base_bdevs_discovered": 0, 00:16:46.835 "num_base_bdevs_operational": 2, 00:16:46.835 
"base_bdevs_list": [ 00:16:46.835 { 00:16:46.835 "name": "BaseBdev1", 00:16:46.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.835 "is_configured": false, 00:16:46.835 "data_offset": 0, 00:16:46.835 "data_size": 0 00:16:46.835 }, 00:16:46.835 { 00:16:46.835 "name": "BaseBdev2", 00:16:46.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.835 "is_configured": false, 00:16:46.835 "data_offset": 0, 00:16:46.835 "data_size": 0 00:16:46.835 } 00:16:46.835 ] 00:16:46.835 }' 00:16:46.835 08:43:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.835 08:43:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.399 08:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.657 [2024-07-12 08:43:22.845138] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.657 [2024-07-12 08:43:22.845377] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:47.914 08:43:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:48.171 [2024-07-12 08:43:23.117282] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.171 [2024-07-12 08:43:23.117543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.171 [2024-07-12 08:43:23.117648] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.171 [2024-07-12 08:43:23.117714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.171 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.429 [2024-07-12 08:43:23.444831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.429 BaseBdev1 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.429 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.688 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.946 [ 00:16:48.946 { 00:16:48.946 "name": "BaseBdev1", 00:16:48.946 "aliases": [ 00:16:48.946 "cd7a9805-e3c1-4323-9fa0-3ed386cb687b" 00:16:48.946 ], 00:16:48.946 "product_name": "Malloc disk", 00:16:48.946 "block_size": 512, 
00:16:48.946 "num_blocks": 65536, 00:16:48.946 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:48.946 "assigned_rate_limits": { 00:16:48.946 "rw_ios_per_sec": 0, 00:16:48.946 "rw_mbytes_per_sec": 0, 00:16:48.946 "r_mbytes_per_sec": 0, 00:16:48.946 "w_mbytes_per_sec": 0 00:16:48.946 }, 00:16:48.946 "claimed": true, 00:16:48.946 "claim_type": "exclusive_write", 00:16:48.946 "zoned": false, 00:16:48.946 "supported_io_types": { 00:16:48.946 "read": true, 00:16:48.946 "write": true, 00:16:48.946 "unmap": true, 00:16:48.946 "flush": true, 00:16:48.946 "reset": true, 00:16:48.946 "nvme_admin": false, 00:16:48.946 "nvme_io": false, 00:16:48.946 "nvme_io_md": false, 00:16:48.946 "write_zeroes": true, 00:16:48.946 "zcopy": true, 00:16:48.946 "get_zone_info": false, 00:16:48.946 "zone_management": false, 00:16:48.946 "zone_append": false, 00:16:48.946 "compare": false, 00:16:48.946 "compare_and_write": false, 00:16:48.946 "abort": true, 00:16:48.946 "seek_hole": false, 00:16:48.946 "seek_data": false, 00:16:48.946 "copy": true, 00:16:48.946 "nvme_iov_md": false 00:16:48.946 }, 00:16:48.946 "memory_domains": [ 00:16:48.946 { 00:16:48.946 "dma_device_id": "system", 00:16:48.946 "dma_device_type": 1 00:16:48.946 }, 00:16:48.946 { 00:16:48.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.946 "dma_device_type": 2 00:16:48.946 } 00:16:48.946 ], 00:16:48.946 "driver_specific": {} 00:16:48.946 } 00:16:48.946 ] 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.946 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.947 08:43:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.205 08:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.205 "name": "Existed_Raid", 00:16:49.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.205 "strip_size_kb": 64, 00:16:49.205 "state": "configuring", 00:16:49.205 "raid_level": "concat", 00:16:49.205 "superblock": false, 00:16:49.205 "num_base_bdevs": 2, 00:16:49.205 "num_base_bdevs_discovered": 1, 00:16:49.205 "num_base_bdevs_operational": 2, 00:16:49.205 "base_bdevs_list": [ 00:16:49.205 { 00:16:49.205 "name": 
"BaseBdev1", 00:16:49.205 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:49.205 "is_configured": true, 00:16:49.205 "data_offset": 0, 00:16:49.205 "data_size": 65536 00:16:49.205 }, 00:16:49.205 { 00:16:49.205 "name": "BaseBdev2", 00:16:49.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.205 "is_configured": false, 00:16:49.205 "data_offset": 0, 00:16:49.205 "data_size": 0 00:16:49.205 } 00:16:49.205 ] 00:16:49.205 }' 00:16:49.205 08:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.205 08:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.139 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.396 [2024-07-12 08:43:25.368420] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.396 [2024-07-12 08:43:25.368616] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:50.396 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:50.654 [2024-07-12 08:43:25.608507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.654 [2024-07-12 08:43:25.610870] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.654 [2024-07-12 08:43:25.611066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.654 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.911 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.911 "name": "Existed_Raid", 
00:16:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.911 "strip_size_kb": 64, 00:16:50.911 "state": "configuring", 00:16:50.911 "raid_level": "concat", 00:16:50.911 "superblock": false, 00:16:50.911 "num_base_bdevs": 2, 00:16:50.911 "num_base_bdevs_discovered": 1, 00:16:50.911 "num_base_bdevs_operational": 2, 00:16:50.911 "base_bdevs_list": [ 00:16:50.911 { 00:16:50.911 "name": "BaseBdev1", 00:16:50.911 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:50.911 "is_configured": true, 00:16:50.911 "data_offset": 0, 00:16:50.911 "data_size": 65536 00:16:50.911 }, 00:16:50.911 { 00:16:50.911 "name": "BaseBdev2", 00:16:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.911 "is_configured": false, 00:16:50.911 "data_offset": 0, 00:16:50.911 "data_size": 0 00:16:50.911 } 00:16:50.911 ] 00:16:50.911 }' 00:16:50.911 08:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.911 08:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.477 08:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.043 [2024-07-12 08:43:26.987231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.043 [2024-07-12 08:43:26.987307] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:52.043 [2024-07-12 08:43:26.987331] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:52.043 [2024-07-12 08:43:26.987480] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:52.043 [2024-07-12 08:43:26.987835] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:52.043 [2024-07-12 08:43:26.987862] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:52.043 [2024-07-12 08:43:26.988152] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.043 BaseBdev2 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:52.043 08:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.301 08:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.559 [ 00:16:52.559 { 00:16:52.559 "name": "BaseBdev2", 00:16:52.559 "aliases": [ 00:16:52.559 "73d9acdd-6df3-4e10-9890-02132fb4f1a9" 00:16:52.559 ], 00:16:52.559 "product_name": "Malloc disk", 00:16:52.559 "block_size": 512, 00:16:52.559 "num_blocks": 65536, 00:16:52.559 "uuid": "73d9acdd-6df3-4e10-9890-02132fb4f1a9", 
00:16:52.559 "assigned_rate_limits": { 00:16:52.559 "rw_ios_per_sec": 0, 00:16:52.559 "rw_mbytes_per_sec": 0, 00:16:52.559 "r_mbytes_per_sec": 0, 00:16:52.559 "w_mbytes_per_sec": 0 00:16:52.559 }, 00:16:52.559 "claimed": true, 00:16:52.559 "claim_type": "exclusive_write", 00:16:52.559 "zoned": false, 00:16:52.559 "supported_io_types": { 00:16:52.559 "read": true, 00:16:52.559 "write": true, 00:16:52.559 "unmap": true, 00:16:52.559 "flush": true, 00:16:52.559 "reset": true, 00:16:52.559 "nvme_admin": false, 00:16:52.559 "nvme_io": false, 00:16:52.559 "nvme_io_md": false, 00:16:52.559 "write_zeroes": true, 00:16:52.559 "zcopy": true, 00:16:52.559 "get_zone_info": false, 00:16:52.559 "zone_management": false, 00:16:52.559 "zone_append": false, 00:16:52.559 "compare": false, 00:16:52.559 "compare_and_write": false, 00:16:52.559 "abort": true, 00:16:52.559 "seek_hole": false, 00:16:52.559 "seek_data": false, 00:16:52.559 "copy": true, 00:16:52.559 "nvme_iov_md": false 00:16:52.559 }, 00:16:52.559 "memory_domains": [ 00:16:52.559 { 00:16:52.559 "dma_device_id": "system", 00:16:52.559 "dma_device_type": 1 00:16:52.559 }, 00:16:52.559 { 00:16:52.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.559 "dma_device_type": 2 00:16:52.559 } 00:16:52.559 ], 00:16:52.559 "driver_specific": {} 00:16:52.559 } 00:16:52.559 ] 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.559 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.817 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.817 "name": "Existed_Raid", 00:16:52.817 "uuid": "497afb01-c5ab-40ca-97a6-ddc0fd96184c", 00:16:52.817 "strip_size_kb": 64, 00:16:52.817 "state": "online", 00:16:52.817 "raid_level": "concat", 00:16:52.817 "superblock": false, 00:16:52.817 "num_base_bdevs": 2, 00:16:52.817 "num_base_bdevs_discovered": 2, 00:16:52.817 
"num_base_bdevs_operational": 2, 00:16:52.817 "base_bdevs_list": [ 00:16:52.817 { 00:16:52.817 "name": "BaseBdev1", 00:16:52.817 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:52.817 "is_configured": true, 00:16:52.817 "data_offset": 0, 00:16:52.817 "data_size": 65536 00:16:52.817 }, 00:16:52.817 { 00:16:52.817 "name": "BaseBdev2", 00:16:52.817 "uuid": "73d9acdd-6df3-4e10-9890-02132fb4f1a9", 00:16:52.817 "is_configured": true, 00:16:52.817 "data_offset": 0, 00:16:52.817 "data_size": 65536 00:16:52.817 } 00:16:52.817 ] 00:16:52.817 }' 00:16:52.817 08:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.817 08:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:53.383 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:53.641 [2024-07-12 08:43:28.724053] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.641 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:53.641 "name": "Existed_Raid", 00:16:53.641 "aliases": [ 00:16:53.641 "497afb01-c5ab-40ca-97a6-ddc0fd96184c" 00:16:53.641 ], 00:16:53.641 "product_name": "Raid Volume", 00:16:53.641 "block_size": 512, 00:16:53.641 "num_blocks": 131072, 00:16:53.641 "uuid": "497afb01-c5ab-40ca-97a6-ddc0fd96184c", 00:16:53.641 "assigned_rate_limits": { 00:16:53.641 "rw_ios_per_sec": 0, 00:16:53.641 "rw_mbytes_per_sec": 0, 00:16:53.641 "r_mbytes_per_sec": 0, 00:16:53.642 "w_mbytes_per_sec": 0 00:16:53.642 }, 00:16:53.642 "claimed": false, 00:16:53.642 "zoned": false, 00:16:53.642 "supported_io_types": { 00:16:53.642 "read": true, 00:16:53.642 "write": true, 00:16:53.642 "unmap": true, 00:16:53.642 "flush": true, 00:16:53.642 "reset": true, 00:16:53.642 "nvme_admin": false, 00:16:53.642 "nvme_io": false, 00:16:53.642 "nvme_io_md": false, 00:16:53.642 "write_zeroes": true, 00:16:53.642 "zcopy": false, 00:16:53.642 "get_zone_info": false, 00:16:53.642 "zone_management": false, 00:16:53.642 "zone_append": false, 00:16:53.642 "compare": false, 00:16:53.642 "compare_and_write": false, 00:16:53.642 "abort": false, 00:16:53.642 "seek_hole": false, 00:16:53.642 "seek_data": false, 00:16:53.642 "copy": false, 00:16:53.642 "nvme_iov_md": false 00:16:53.642 }, 00:16:53.642 "memory_domains": [ 00:16:53.642 { 00:16:53.642 "dma_device_id": "system", 00:16:53.642 "dma_device_type": 1 00:16:53.642 }, 00:16:53.642 { 00:16:53.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.642 "dma_device_type": 2 00:16:53.642 }, 00:16:53.642 { 00:16:53.642 "dma_device_id": "system", 00:16:53.642 "dma_device_type": 1 00:16:53.642 }, 
00:16:53.642 { 00:16:53.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.642 "dma_device_type": 2 00:16:53.642 } 00:16:53.642 ], 00:16:53.642 "driver_specific": { 00:16:53.642 "raid": { 00:16:53.642 "uuid": "497afb01-c5ab-40ca-97a6-ddc0fd96184c", 00:16:53.642 "strip_size_kb": 64, 00:16:53.642 "state": "online", 00:16:53.642 "raid_level": "concat", 00:16:53.642 "superblock": false, 00:16:53.642 "num_base_bdevs": 2, 00:16:53.642 "num_base_bdevs_discovered": 2, 00:16:53.642 "num_base_bdevs_operational": 2, 00:16:53.642 "base_bdevs_list": [ 00:16:53.642 { 00:16:53.642 "name": "BaseBdev1", 00:16:53.642 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:53.642 "is_configured": true, 00:16:53.642 "data_offset": 0, 00:16:53.642 "data_size": 65536 00:16:53.642 }, 00:16:53.642 { 00:16:53.642 "name": "BaseBdev2", 00:16:53.642 "uuid": "73d9acdd-6df3-4e10-9890-02132fb4f1a9", 00:16:53.642 "is_configured": true, 00:16:53.642 "data_offset": 0, 00:16:53.642 "data_size": 65536 00:16:53.642 } 00:16:53.642 ] 00:16:53.642 } 00:16:53.642 } 00:16:53.642 }' 00:16:53.642 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.642 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:53.642 BaseBdev2' 00:16:53.642 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.642 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:53.642 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.900 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.900 "name": "BaseBdev1", 00:16:53.900 "aliases": [ 00:16:53.900 "cd7a9805-e3c1-4323-9fa0-3ed386cb687b" 00:16:53.900 ], 00:16:53.900 "product_name": "Malloc disk", 00:16:53.900 "block_size": 512, 00:16:53.900 "num_blocks": 65536, 00:16:53.900 "uuid": "cd7a9805-e3c1-4323-9fa0-3ed386cb687b", 00:16:53.900 "assigned_rate_limits": { 00:16:53.900 "rw_ios_per_sec": 0, 00:16:53.900 "rw_mbytes_per_sec": 0, 00:16:53.900 "r_mbytes_per_sec": 0, 00:16:53.900 "w_mbytes_per_sec": 0 00:16:53.900 }, 00:16:53.900 "claimed": true, 00:16:53.900 "claim_type": "exclusive_write", 00:16:53.900 "zoned": false, 00:16:53.900 "supported_io_types": { 00:16:53.900 "read": true, 00:16:53.900 "write": true, 00:16:53.900 "unmap": true, 00:16:53.900 "flush": true, 00:16:53.900 "reset": true, 00:16:53.900 "nvme_admin": false, 00:16:53.900 "nvme_io": false, 00:16:53.900 "nvme_io_md": false, 00:16:53.900 "write_zeroes": true, 00:16:53.900 "zcopy": true, 00:16:53.900 "get_zone_info": false, 00:16:53.900 "zone_management": false, 00:16:53.900 "zone_append": false, 00:16:53.900 "compare": false, 00:16:53.900 "compare_and_write": false, 00:16:53.900 "abort": true, 00:16:53.900 "seek_hole": false, 00:16:53.900 "seek_data": false, 00:16:53.900 "copy": true, 00:16:53.900 "nvme_iov_md": false 00:16:53.900 }, 00:16:53.900 "memory_domains": [ 00:16:53.900 { 00:16:53.900 "dma_device_id": "system", 00:16:53.900 "dma_device_type": 1 00:16:53.900 }, 00:16:53.900 { 00:16:53.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.900 "dma_device_type": 2 00:16:53.900 } 00:16:53.900 ], 00:16:53.900 "driver_specific": {} 00:16:53.900 }' 00:16:53.900 08:43:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.157 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:54.415 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.673 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.673 "name": "BaseBdev2", 00:16:54.673 "aliases": [ 00:16:54.673 "73d9acdd-6df3-4e10-9890-02132fb4f1a9" 00:16:54.673 ], 00:16:54.673 "product_name": "Malloc disk", 00:16:54.673 "block_size": 512, 00:16:54.673 "num_blocks": 65536, 00:16:54.673 "uuid": "73d9acdd-6df3-4e10-9890-02132fb4f1a9", 00:16:54.673 "assigned_rate_limits": { 00:16:54.673 "rw_ios_per_sec": 0, 00:16:54.673 "rw_mbytes_per_sec": 0, 00:16:54.673 "r_mbytes_per_sec": 0, 00:16:54.673 "w_mbytes_per_sec": 0 00:16:54.673 }, 00:16:54.673 "claimed": true, 00:16:54.673 "claim_type": "exclusive_write", 00:16:54.673 "zoned": false, 00:16:54.673 "supported_io_types": { 00:16:54.673 "read": true, 00:16:54.673 "write": true, 00:16:54.673 "unmap": true, 00:16:54.673 "flush": true, 00:16:54.673 "reset": true, 00:16:54.673 "nvme_admin": false, 00:16:54.673 "nvme_io": false, 00:16:54.673 "nvme_io_md": false, 00:16:54.673 "write_zeroes": true, 00:16:54.673 "zcopy": true, 00:16:54.673 "get_zone_info": false, 00:16:54.673 "zone_management": false, 00:16:54.673 "zone_append": false, 00:16:54.673 "compare": false, 00:16:54.673 "compare_and_write": false, 00:16:54.673 "abort": true, 00:16:54.673 "seek_hole": false, 00:16:54.673 "seek_data": false, 00:16:54.673 "copy": true, 00:16:54.673 "nvme_iov_md": false 00:16:54.673 }, 00:16:54.673 "memory_domains": [ 00:16:54.673 { 00:16:54.673 "dma_device_id": "system", 00:16:54.673 "dma_device_type": 1 00:16:54.673 }, 00:16:54.673 { 00:16:54.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.673 "dma_device_type": 2 00:16:54.673 } 00:16:54.673 ], 00:16:54.673 "driver_specific": {} 00:16:54.673 }' 00:16:54.673 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.673 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.931 08:43:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.931 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.931 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.931 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.931 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.931 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.191 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.191 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.191 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.191 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.191 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.452 [2024-07-12 08:43:30.536299] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.452 [2024-07-12 08:43:30.536350] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.452 [2024-07-12 08:43:30.536424] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:55.452 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.453 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.018 08:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.018 "name": "Existed_Raid", 00:16:56.018 "uuid": "497afb01-c5ab-40ca-97a6-ddc0fd96184c", 00:16:56.018 "strip_size_kb": 64, 00:16:56.018 "state": "offline", 00:16:56.018 "raid_level": "concat", 00:16:56.018 "superblock": false, 00:16:56.018 "num_base_bdevs": 2, 00:16:56.018 "num_base_bdevs_discovered": 1, 00:16:56.018 "num_base_bdevs_operational": 1, 00:16:56.018 "base_bdevs_list": [ 00:16:56.018 { 00:16:56.018 "name": null, 00:16:56.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.018 "is_configured": false, 00:16:56.018 "data_offset": 0, 00:16:56.018 "data_size": 65536 00:16:56.018 }, 00:16:56.018 { 00:16:56.018 "name": "BaseBdev2", 00:16:56.018 "uuid": "73d9acdd-6df3-4e10-9890-02132fb4f1a9", 00:16:56.018 "is_configured": true, 00:16:56.018 "data_offset": 0, 00:16:56.018 "data_size": 65536 00:16:56.018 } 00:16:56.018 ] 00:16:56.018 }' 00:16:56.018 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.018 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.583 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:56.583 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:56.583 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.583 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:56.842 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:56.842 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.842 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:57.099 [2024-07-12 08:43:32.164668] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.099 [2024-07-12 08:43:32.164758] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:57.099 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:57.099 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:57.099 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.099 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 122455 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 122455 ']' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 122455 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122455 00:16:57.360 killing process with pid 122455 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122455' 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 122455 00:16:57.360 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 122455 00:16:57.361 [2024-07-12 08:43:32.536725] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.361 [2024-07-12 08:43:32.536853] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.734 ************************************ 00:16:58.734 END TEST raid_state_function_test 00:16:58.734 ************************************ 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:58.734 00:16:58.734 real 0m13.450s 00:16:58.734 user 0m24.176s 00:16:58.734 sys 0m1.418s 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.734 08:43:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:58.734 08:43:33 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:58.734 08:43:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:58.734 08:43:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.734 08:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.734 ************************************ 00:16:58.734 START TEST raid_state_function_test_sb 00:16:58.734 ************************************ 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:58.734 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=122894 00:16:58.735 Process raid pid: 122894 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 122894' 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 122894 /var/tmp/spdk-raid.sock 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 122894 ']' 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:58.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.735 08:43:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:58.735 [2024-07-12 08:43:33.781398] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
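raid_state_function_test_sb repeats the same state-machine walk with superblocks enabled: the only change on the driver side is that superblock_create_arg becomes -s, so every bdev_raid_create call below carries that flag. The visible effect is in the dumps that follow: with a superblock each 65536-block base bdev gives up 2048 blocks to metadata, so base bdevs report "data_offset": 2048 and "data_size": 63488, and the assembled concat volume is 126976 blocks rather than the 131072 seen in the non-superblock run. A sketch of the one changed call, reusing the rpc helper from the earlier sketch:

    # -s asks raid_bdev to write a superblock onto each base bdev; the
    # 2048-block data_offset and 63488-block data_size in the dumps below
    # follow from that reservation.
    "${rpc[@]}" bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid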
00:16:58.735 [2024-07-12 08:43:33.781952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.992 [2024-07-12 08:43:33.960848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.249 [2024-07-12 08:43:34.208572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.249 [2024-07-12 08:43:34.411305] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:59.813 [2024-07-12 08:43:34.935018] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.813 [2024-07-12 08:43:34.935133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.813 [2024-07-12 08:43:34.935149] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.813 [2024-07-12 08:43:34.935180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.813 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.814 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.076 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.076 "name": "Existed_Raid", 00:17:00.076 "uuid": "8f3279bf-9aca-4550-a316-a2a15ce5b408", 00:17:00.076 "strip_size_kb": 64, 00:17:00.076 "state": "configuring", 00:17:00.076 "raid_level": "concat", 00:17:00.076 "superblock": true, 00:17:00.076 "num_base_bdevs": 2, 00:17:00.076 "num_base_bdevs_discovered": 0, 00:17:00.076 
"num_base_bdevs_operational": 2, 00:17:00.076 "base_bdevs_list": [ 00:17:00.076 { 00:17:00.076 "name": "BaseBdev1", 00:17:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.076 "is_configured": false, 00:17:00.076 "data_offset": 0, 00:17:00.076 "data_size": 0 00:17:00.076 }, 00:17:00.076 { 00:17:00.076 "name": "BaseBdev2", 00:17:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.076 "is_configured": false, 00:17:00.076 "data_offset": 0, 00:17:00.076 "data_size": 0 00:17:00.076 } 00:17:00.076 ] 00:17:00.076 }' 00:17:00.076 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.076 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.022 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:01.022 [2024-07-12 08:43:36.143179] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.022 [2024-07-12 08:43:36.143237] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:01.022 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:01.312 [2024-07-12 08:43:36.375287] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.312 [2024-07-12 08:43:36.375374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.312 [2024-07-12 08:43:36.375388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.312 [2024-07-12 08:43:36.375416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.312 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.570 [2024-07-12 08:43:36.656090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.570 BaseBdev1 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:01.570 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.828 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.085 [ 00:17:02.085 { 00:17:02.085 "name": "BaseBdev1", 00:17:02.085 "aliases": [ 00:17:02.085 "dd1cea73-a619-4f08-a316-70e2147a74f9" 
00:17:02.085 ], 00:17:02.085 "product_name": "Malloc disk", 00:17:02.085 "block_size": 512, 00:17:02.085 "num_blocks": 65536, 00:17:02.085 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:02.085 "assigned_rate_limits": { 00:17:02.085 "rw_ios_per_sec": 0, 00:17:02.085 "rw_mbytes_per_sec": 0, 00:17:02.085 "r_mbytes_per_sec": 0, 00:17:02.085 "w_mbytes_per_sec": 0 00:17:02.085 }, 00:17:02.085 "claimed": true, 00:17:02.085 "claim_type": "exclusive_write", 00:17:02.085 "zoned": false, 00:17:02.085 "supported_io_types": { 00:17:02.085 "read": true, 00:17:02.085 "write": true, 00:17:02.085 "unmap": true, 00:17:02.085 "flush": true, 00:17:02.085 "reset": true, 00:17:02.085 "nvme_admin": false, 00:17:02.085 "nvme_io": false, 00:17:02.085 "nvme_io_md": false, 00:17:02.085 "write_zeroes": true, 00:17:02.085 "zcopy": true, 00:17:02.085 "get_zone_info": false, 00:17:02.085 "zone_management": false, 00:17:02.085 "zone_append": false, 00:17:02.085 "compare": false, 00:17:02.085 "compare_and_write": false, 00:17:02.085 "abort": true, 00:17:02.085 "seek_hole": false, 00:17:02.085 "seek_data": false, 00:17:02.085 "copy": true, 00:17:02.085 "nvme_iov_md": false 00:17:02.085 }, 00:17:02.085 "memory_domains": [ 00:17:02.085 { 00:17:02.085 "dma_device_id": "system", 00:17:02.085 "dma_device_type": 1 00:17:02.085 }, 00:17:02.085 { 00:17:02.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.085 "dma_device_type": 2 00:17:02.085 } 00:17:02.085 ], 00:17:02.085 "driver_specific": {} 00:17:02.085 } 00:17:02.085 ] 00:17:02.085 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:02.085 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:02.085 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.085 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.086 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.342 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.342 "name": "Existed_Raid", 00:17:02.342 "uuid": "cf6c840e-a436-4647-b8b6-f69819d47576", 00:17:02.342 "strip_size_kb": 64, 00:17:02.342 "state": "configuring", 00:17:02.342 "raid_level": "concat", 00:17:02.342 "superblock": true, 00:17:02.342 "num_base_bdevs": 2, 00:17:02.342 
"num_base_bdevs_discovered": 1, 00:17:02.342 "num_base_bdevs_operational": 2, 00:17:02.342 "base_bdevs_list": [ 00:17:02.342 { 00:17:02.342 "name": "BaseBdev1", 00:17:02.342 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:02.342 "is_configured": true, 00:17:02.342 "data_offset": 2048, 00:17:02.342 "data_size": 63488 00:17:02.342 }, 00:17:02.342 { 00:17:02.342 "name": "BaseBdev2", 00:17:02.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.342 "is_configured": false, 00:17:02.342 "data_offset": 0, 00:17:02.342 "data_size": 0 00:17:02.342 } 00:17:02.342 ] 00:17:02.342 }' 00:17:02.342 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.342 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.533 [2024-07-12 08:43:38.472595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.533 [2024-07-12 08:43:38.472668] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:03.533 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.791 [2024-07-12 08:43:38.752709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.791 [2024-07-12 08:43:38.754933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.791 [2024-07-12 08:43:38.755013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.791 08:43:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.048 08:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.048 "name": "Existed_Raid", 00:17:04.048 "uuid": "42facfaa-b1c6-4b3b-8753-d997dbe31307", 00:17:04.048 "strip_size_kb": 64, 00:17:04.048 "state": "configuring", 00:17:04.048 "raid_level": "concat", 00:17:04.048 "superblock": true, 00:17:04.048 "num_base_bdevs": 2, 00:17:04.048 "num_base_bdevs_discovered": 1, 00:17:04.048 "num_base_bdevs_operational": 2, 00:17:04.048 "base_bdevs_list": [ 00:17:04.048 { 00:17:04.048 "name": "BaseBdev1", 00:17:04.048 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:04.048 "is_configured": true, 00:17:04.048 "data_offset": 2048, 00:17:04.048 "data_size": 63488 00:17:04.048 }, 00:17:04.048 { 00:17:04.048 "name": "BaseBdev2", 00:17:04.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.048 "is_configured": false, 00:17:04.048 "data_offset": 0, 00:17:04.048 "data_size": 0 00:17:04.048 } 00:17:04.048 ] 00:17:04.048 }' 00:17:04.048 08:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.048 08:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 08:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.926 [2024-07-12 08:43:40.032029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.926 [2024-07-12 08:43:40.032314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:04.926 [2024-07-12 08:43:40.032332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:04.926 [2024-07-12 08:43:40.032477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:04.926 BaseBdev2 00:17:04.926 [2024-07-12 08:43:40.032825] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:04.926 [2024-07-12 08:43:40.032853] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:04.926 [2024-07-12 08:43:40.033011] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:04.926 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.184 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.441 [ 00:17:05.441 { 00:17:05.441 "name": "BaseBdev2", 00:17:05.441 
"aliases": [ 00:17:05.441 "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945" 00:17:05.441 ], 00:17:05.441 "product_name": "Malloc disk", 00:17:05.441 "block_size": 512, 00:17:05.441 "num_blocks": 65536, 00:17:05.441 "uuid": "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945", 00:17:05.441 "assigned_rate_limits": { 00:17:05.441 "rw_ios_per_sec": 0, 00:17:05.441 "rw_mbytes_per_sec": 0, 00:17:05.441 "r_mbytes_per_sec": 0, 00:17:05.441 "w_mbytes_per_sec": 0 00:17:05.441 }, 00:17:05.441 "claimed": true, 00:17:05.441 "claim_type": "exclusive_write", 00:17:05.441 "zoned": false, 00:17:05.441 "supported_io_types": { 00:17:05.441 "read": true, 00:17:05.441 "write": true, 00:17:05.441 "unmap": true, 00:17:05.441 "flush": true, 00:17:05.441 "reset": true, 00:17:05.441 "nvme_admin": false, 00:17:05.441 "nvme_io": false, 00:17:05.441 "nvme_io_md": false, 00:17:05.441 "write_zeroes": true, 00:17:05.441 "zcopy": true, 00:17:05.441 "get_zone_info": false, 00:17:05.441 "zone_management": false, 00:17:05.441 "zone_append": false, 00:17:05.441 "compare": false, 00:17:05.441 "compare_and_write": false, 00:17:05.441 "abort": true, 00:17:05.441 "seek_hole": false, 00:17:05.441 "seek_data": false, 00:17:05.441 "copy": true, 00:17:05.441 "nvme_iov_md": false 00:17:05.441 }, 00:17:05.441 "memory_domains": [ 00:17:05.441 { 00:17:05.441 "dma_device_id": "system", 00:17:05.441 "dma_device_type": 1 00:17:05.441 }, 00:17:05.441 { 00:17:05.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.441 "dma_device_type": 2 00:17:05.441 } 00:17:05.441 ], 00:17:05.441 "driver_specific": {} 00:17:05.441 } 00:17:05.441 ] 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.441 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.698 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.698 "name": "Existed_Raid", 
00:17:05.698 "uuid": "42facfaa-b1c6-4b3b-8753-d997dbe31307", 00:17:05.698 "strip_size_kb": 64, 00:17:05.698 "state": "online", 00:17:05.698 "raid_level": "concat", 00:17:05.698 "superblock": true, 00:17:05.698 "num_base_bdevs": 2, 00:17:05.698 "num_base_bdevs_discovered": 2, 00:17:05.698 "num_base_bdevs_operational": 2, 00:17:05.698 "base_bdevs_list": [ 00:17:05.698 { 00:17:05.698 "name": "BaseBdev1", 00:17:05.698 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:05.698 "is_configured": true, 00:17:05.698 "data_offset": 2048, 00:17:05.698 "data_size": 63488 00:17:05.698 }, 00:17:05.698 { 00:17:05.698 "name": "BaseBdev2", 00:17:05.698 "uuid": "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945", 00:17:05.698 "is_configured": true, 00:17:05.698 "data_offset": 2048, 00:17:05.698 "data_size": 63488 00:17:05.698 } 00:17:05.698 ] 00:17:05.698 }' 00:17:05.698 08:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.698 08:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:06.263 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:06.520 [2024-07-12 08:43:41.640805] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.520 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:06.520 "name": "Existed_Raid", 00:17:06.520 "aliases": [ 00:17:06.520 "42facfaa-b1c6-4b3b-8753-d997dbe31307" 00:17:06.520 ], 00:17:06.520 "product_name": "Raid Volume", 00:17:06.520 "block_size": 512, 00:17:06.520 "num_blocks": 126976, 00:17:06.520 "uuid": "42facfaa-b1c6-4b3b-8753-d997dbe31307", 00:17:06.520 "assigned_rate_limits": { 00:17:06.520 "rw_ios_per_sec": 0, 00:17:06.520 "rw_mbytes_per_sec": 0, 00:17:06.520 "r_mbytes_per_sec": 0, 00:17:06.520 "w_mbytes_per_sec": 0 00:17:06.520 }, 00:17:06.520 "claimed": false, 00:17:06.520 "zoned": false, 00:17:06.520 "supported_io_types": { 00:17:06.520 "read": true, 00:17:06.520 "write": true, 00:17:06.520 "unmap": true, 00:17:06.520 "flush": true, 00:17:06.520 "reset": true, 00:17:06.520 "nvme_admin": false, 00:17:06.521 "nvme_io": false, 00:17:06.521 "nvme_io_md": false, 00:17:06.521 "write_zeroes": true, 00:17:06.521 "zcopy": false, 00:17:06.521 "get_zone_info": false, 00:17:06.521 "zone_management": false, 00:17:06.521 "zone_append": false, 00:17:06.521 "compare": false, 00:17:06.521 "compare_and_write": false, 00:17:06.521 "abort": false, 00:17:06.521 "seek_hole": false, 00:17:06.521 "seek_data": false, 00:17:06.521 "copy": false, 00:17:06.521 "nvme_iov_md": false 00:17:06.521 }, 00:17:06.521 "memory_domains": [ 
00:17:06.521 { 00:17:06.521 "dma_device_id": "system", 00:17:06.521 "dma_device_type": 1 00:17:06.521 }, 00:17:06.521 { 00:17:06.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.521 "dma_device_type": 2 00:17:06.521 }, 00:17:06.521 { 00:17:06.521 "dma_device_id": "system", 00:17:06.521 "dma_device_type": 1 00:17:06.521 }, 00:17:06.521 { 00:17:06.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.521 "dma_device_type": 2 00:17:06.521 } 00:17:06.521 ], 00:17:06.521 "driver_specific": { 00:17:06.521 "raid": { 00:17:06.521 "uuid": "42facfaa-b1c6-4b3b-8753-d997dbe31307", 00:17:06.521 "strip_size_kb": 64, 00:17:06.521 "state": "online", 00:17:06.521 "raid_level": "concat", 00:17:06.521 "superblock": true, 00:17:06.521 "num_base_bdevs": 2, 00:17:06.521 "num_base_bdevs_discovered": 2, 00:17:06.521 "num_base_bdevs_operational": 2, 00:17:06.521 "base_bdevs_list": [ 00:17:06.521 { 00:17:06.521 "name": "BaseBdev1", 00:17:06.521 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:06.521 "is_configured": true, 00:17:06.521 "data_offset": 2048, 00:17:06.521 "data_size": 63488 00:17:06.521 }, 00:17:06.521 { 00:17:06.521 "name": "BaseBdev2", 00:17:06.521 "uuid": "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945", 00:17:06.521 "is_configured": true, 00:17:06.521 "data_offset": 2048, 00:17:06.521 "data_size": 63488 00:17:06.521 } 00:17:06.521 ] 00:17:06.521 } 00:17:06.521 } 00:17:06.521 }' 00:17:06.521 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:06.779 BaseBdev2' 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.779 "name": "BaseBdev1", 00:17:06.779 "aliases": [ 00:17:06.779 "dd1cea73-a619-4f08-a316-70e2147a74f9" 00:17:06.779 ], 00:17:06.779 "product_name": "Malloc disk", 00:17:06.779 "block_size": 512, 00:17:06.779 "num_blocks": 65536, 00:17:06.779 "uuid": "dd1cea73-a619-4f08-a316-70e2147a74f9", 00:17:06.779 "assigned_rate_limits": { 00:17:06.779 "rw_ios_per_sec": 0, 00:17:06.779 "rw_mbytes_per_sec": 0, 00:17:06.779 "r_mbytes_per_sec": 0, 00:17:06.779 "w_mbytes_per_sec": 0 00:17:06.779 }, 00:17:06.779 "claimed": true, 00:17:06.779 "claim_type": "exclusive_write", 00:17:06.779 "zoned": false, 00:17:06.779 "supported_io_types": { 00:17:06.779 "read": true, 00:17:06.779 "write": true, 00:17:06.779 "unmap": true, 00:17:06.779 "flush": true, 00:17:06.779 "reset": true, 00:17:06.779 "nvme_admin": false, 00:17:06.779 "nvme_io": false, 00:17:06.779 "nvme_io_md": false, 00:17:06.779 "write_zeroes": true, 00:17:06.779 "zcopy": true, 00:17:06.779 "get_zone_info": false, 00:17:06.779 "zone_management": false, 00:17:06.779 "zone_append": false, 00:17:06.779 "compare": false, 00:17:06.779 "compare_and_write": false, 00:17:06.779 "abort": true, 00:17:06.779 "seek_hole": false, 00:17:06.779 "seek_data": false, 00:17:06.779 "copy": true, 00:17:06.779 "nvme_iov_md": false 00:17:06.779 }, 00:17:06.779 "memory_domains": [ 
00:17:06.779 { 00:17:06.779 "dma_device_id": "system", 00:17:06.779 "dma_device_type": 1 00:17:06.779 }, 00:17:06.779 { 00:17:06.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.779 "dma_device_type": 2 00:17:06.779 } 00:17:06.779 ], 00:17:06.779 "driver_specific": {} 00:17:06.779 }' 00:17:06.779 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.036 08:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.036 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:07.294 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.553 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.553 "name": "BaseBdev2", 00:17:07.553 "aliases": [ 00:17:07.553 "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945" 00:17:07.553 ], 00:17:07.553 "product_name": "Malloc disk", 00:17:07.553 "block_size": 512, 00:17:07.553 "num_blocks": 65536, 00:17:07.553 "uuid": "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945", 00:17:07.553 "assigned_rate_limits": { 00:17:07.553 "rw_ios_per_sec": 0, 00:17:07.553 "rw_mbytes_per_sec": 0, 00:17:07.553 "r_mbytes_per_sec": 0, 00:17:07.553 "w_mbytes_per_sec": 0 00:17:07.553 }, 00:17:07.553 "claimed": true, 00:17:07.553 "claim_type": "exclusive_write", 00:17:07.553 "zoned": false, 00:17:07.553 "supported_io_types": { 00:17:07.553 "read": true, 00:17:07.553 "write": true, 00:17:07.553 "unmap": true, 00:17:07.553 "flush": true, 00:17:07.553 "reset": true, 00:17:07.553 "nvme_admin": false, 00:17:07.553 "nvme_io": false, 00:17:07.553 "nvme_io_md": false, 00:17:07.553 "write_zeroes": true, 00:17:07.553 "zcopy": true, 00:17:07.553 "get_zone_info": false, 00:17:07.553 "zone_management": false, 00:17:07.553 "zone_append": false, 00:17:07.553 "compare": false, 00:17:07.553 "compare_and_write": false, 00:17:07.553 "abort": true, 00:17:07.553 "seek_hole": false, 00:17:07.553 "seek_data": false, 00:17:07.553 "copy": true, 00:17:07.553 "nvme_iov_md": false 00:17:07.553 }, 00:17:07.553 "memory_domains": [ 00:17:07.553 { 00:17:07.553 "dma_device_id": "system", 00:17:07.553 "dma_device_type": 1 00:17:07.553 }, 00:17:07.553 { 00:17:07.553 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:07.553 "dma_device_type": 2 00:17:07.553 } 00:17:07.553 ], 00:17:07.553 "driver_specific": {} 00:17:07.553 }' 00:17:07.553 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.553 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.553 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.553 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.811 08:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.069 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.069 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.327 [2024-07-12 08:43:43.305030] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.327 [2024-07-12 08:43:43.305083] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.327 [2024-07-12 08:43:43.305155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.327 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.636 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.636 "name": "Existed_Raid", 00:17:08.636 "uuid": "42facfaa-b1c6-4b3b-8753-d997dbe31307", 00:17:08.636 "strip_size_kb": 64, 00:17:08.636 "state": "offline", 00:17:08.636 "raid_level": "concat", 00:17:08.636 "superblock": true, 00:17:08.636 "num_base_bdevs": 2, 00:17:08.636 "num_base_bdevs_discovered": 1, 00:17:08.636 "num_base_bdevs_operational": 1, 00:17:08.636 "base_bdevs_list": [ 00:17:08.636 { 00:17:08.636 "name": null, 00:17:08.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.636 "is_configured": false, 00:17:08.636 "data_offset": 2048, 00:17:08.636 "data_size": 63488 00:17:08.636 }, 00:17:08.636 { 00:17:08.636 "name": "BaseBdev2", 00:17:08.636 "uuid": "eb2b9cf3-4305-4eb1-bf3c-d308a14a8945", 00:17:08.636 "is_configured": true, 00:17:08.636 "data_offset": 2048, 00:17:08.636 "data_size": 63488 00:17:08.636 } 00:17:08.636 ] 00:17:08.636 }' 00:17:08.636 08:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.636 08:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.202 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:09.202 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:09.202 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.202 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:09.460 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:09.460 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.460 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:09.718 [2024-07-12 08:43:44.866364] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.718 [2024-07-12 08:43:44.866456] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:09.976 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:09.976 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:09.976 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.976 08:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 122894 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 122894 ']' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 122894 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122894 00:17:10.234 killing process with pid 122894 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122894' 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 122894 00:17:10.234 08:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 122894 00:17:10.234 [2024-07-12 08:43:45.253972] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.234 [2024-07-12 08:43:45.254094] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.609 ************************************ 00:17:11.609 END TEST raid_state_function_test_sb 00:17:11.609 ************************************ 00:17:11.609 08:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:11.609 00:17:11.609 real 0m12.672s 00:17:11.609 user 0m22.508s 00:17:11.609 sys 0m1.451s 00:17:11.609 08:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.609 08:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 08:43:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:11.609 08:43:46 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:11.609 08:43:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:11.609 08:43:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.609 08:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 ************************************ 00:17:11.609 START TEST raid_superblock_test 00:17:11.609 ************************************ 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_pt 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=123296 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 123296 /var/tmp/spdk-raid.sock 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 123296 ']' 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.609 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.609 [2024-07-12 08:43:46.495163] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
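The raid_superblock_test fixture that unfolds below is driven entirely over the JSON-RPC socket of the bdev_svc app starting here. As a minimal sketch, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock exactly as in this run, the same setup can be reproduced by hand with the RPCs that appear verbatim later in the trace:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# two 32 MiB malloc bdevs with 512-byte blocks (hence "num_blocks": 65536 in the dumps)
$rpc bdev_malloc_create 32 512 -b malloc1
$rpc bdev_malloc_create 32 512 -b malloc2
# wrap each malloc in a passthru bdev with a fixed UUID, as the test does for pt1/pt2
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# concat raid with a 64 KiB strip size; -s requests the on-disk superblock under test
$rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
$rpc bdev_raid_get_bdevs all    # raid_bdev1 should now be listed with "state": "online"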
00:17:11.609 [2024-07-12 08:43:46.495410] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123296 ] 00:17:11.609 [2024-07-12 08:43:46.678020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.869 [2024-07-12 08:43:46.890870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.127 [2024-07-12 08:43:47.086516] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.386 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:12.644 malloc1 00:17:12.644 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.904 [2024-07-12 08:43:47.932135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.904 [2024-07-12 08:43:47.932286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.904 [2024-07-12 08:43:47.932332] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:12.904 [2024-07-12 08:43:47.932356] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.904 [2024-07-12 08:43:47.934965] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.904 [2024-07-12 08:43:47.935024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.904 pt1 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.904 08:43:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:13.162 malloc2 00:17:13.162 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.420 [2024-07-12 08:43:48.446035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.420 [2024-07-12 08:43:48.446190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.420 [2024-07-12 08:43:48.446233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:13.420 [2024-07-12 08:43:48.446256] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.420 [2024-07-12 08:43:48.448741] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.420 [2024-07-12 08:43:48.448796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.420 pt2 00:17:13.420 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:13.420 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:13.420 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:13.679 [2024-07-12 08:43:48.674186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.679 [2024-07-12 08:43:48.676426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.679 [2024-07-12 08:43:48.676659] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:17:13.679 [2024-07-12 08:43:48.676684] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:13.679 [2024-07-12 08:43:48.676856] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:13.679 [2024-07-12 08:43:48.677282] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:17:13.679 [2024-07-12 08:43:48.677306] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:17:13.679 [2024-07-12 08:43:48.677491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.679 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.937 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.937 "name": "raid_bdev1", 00:17:13.937 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:13.937 "strip_size_kb": 64, 00:17:13.937 "state": "online", 00:17:13.937 "raid_level": "concat", 00:17:13.937 "superblock": true, 00:17:13.937 "num_base_bdevs": 2, 00:17:13.937 "num_base_bdevs_discovered": 2, 00:17:13.937 "num_base_bdevs_operational": 2, 00:17:13.937 "base_bdevs_list": [ 00:17:13.937 { 00:17:13.937 "name": "pt1", 00:17:13.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.937 "is_configured": true, 00:17:13.937 "data_offset": 2048, 00:17:13.937 "data_size": 63488 00:17:13.937 }, 00:17:13.937 { 00:17:13.937 "name": "pt2", 00:17:13.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.937 "is_configured": true, 00:17:13.937 "data_offset": 2048, 00:17:13.937 "data_size": 63488 00:17:13.937 } 00:17:13.937 ] 00:17:13.937 }' 00:17:13.937 08:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.937 08:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:14.873 [2024-07-12 08:43:49.930729] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:14.873 "name": "raid_bdev1", 00:17:14.873 "aliases": [ 00:17:14.873 "c078ca7d-f2cd-4db7-9866-488dce5304ed" 00:17:14.873 ], 00:17:14.873 "product_name": "Raid Volume", 00:17:14.873 "block_size": 512, 00:17:14.873 "num_blocks": 126976, 00:17:14.873 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:14.873 "assigned_rate_limits": { 00:17:14.873 "rw_ios_per_sec": 0, 00:17:14.873 "rw_mbytes_per_sec": 0, 00:17:14.873 "r_mbytes_per_sec": 0, 00:17:14.873 "w_mbytes_per_sec": 0 00:17:14.873 }, 
00:17:14.873 "claimed": false, 00:17:14.873 "zoned": false, 00:17:14.873 "supported_io_types": { 00:17:14.873 "read": true, 00:17:14.873 "write": true, 00:17:14.873 "unmap": true, 00:17:14.873 "flush": true, 00:17:14.873 "reset": true, 00:17:14.873 "nvme_admin": false, 00:17:14.873 "nvme_io": false, 00:17:14.873 "nvme_io_md": false, 00:17:14.873 "write_zeroes": true, 00:17:14.873 "zcopy": false, 00:17:14.873 "get_zone_info": false, 00:17:14.873 "zone_management": false, 00:17:14.873 "zone_append": false, 00:17:14.873 "compare": false, 00:17:14.873 "compare_and_write": false, 00:17:14.873 "abort": false, 00:17:14.873 "seek_hole": false, 00:17:14.873 "seek_data": false, 00:17:14.873 "copy": false, 00:17:14.873 "nvme_iov_md": false 00:17:14.873 }, 00:17:14.873 "memory_domains": [ 00:17:14.873 { 00:17:14.873 "dma_device_id": "system", 00:17:14.873 "dma_device_type": 1 00:17:14.873 }, 00:17:14.873 { 00:17:14.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.873 "dma_device_type": 2 00:17:14.873 }, 00:17:14.873 { 00:17:14.873 "dma_device_id": "system", 00:17:14.873 "dma_device_type": 1 00:17:14.873 }, 00:17:14.873 { 00:17:14.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.873 "dma_device_type": 2 00:17:14.873 } 00:17:14.873 ], 00:17:14.873 "driver_specific": { 00:17:14.873 "raid": { 00:17:14.873 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:14.873 "strip_size_kb": 64, 00:17:14.873 "state": "online", 00:17:14.873 "raid_level": "concat", 00:17:14.873 "superblock": true, 00:17:14.873 "num_base_bdevs": 2, 00:17:14.873 "num_base_bdevs_discovered": 2, 00:17:14.873 "num_base_bdevs_operational": 2, 00:17:14.873 "base_bdevs_list": [ 00:17:14.873 { 00:17:14.873 "name": "pt1", 00:17:14.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.873 "is_configured": true, 00:17:14.873 "data_offset": 2048, 00:17:14.873 "data_size": 63488 00:17:14.873 }, 00:17:14.873 { 00:17:14.873 "name": "pt2", 00:17:14.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.873 "is_configured": true, 00:17:14.873 "data_offset": 2048, 00:17:14.873 "data_size": 63488 00:17:14.873 } 00:17:14.873 ] 00:17:14.873 } 00:17:14.873 } 00:17:14.873 }' 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:14.873 pt2' 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:14.873 08:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.132 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.133 "name": "pt1", 00:17:15.133 "aliases": [ 00:17:15.133 "00000000-0000-0000-0000-000000000001" 00:17:15.133 ], 00:17:15.133 "product_name": "passthru", 00:17:15.133 "block_size": 512, 00:17:15.133 "num_blocks": 65536, 00:17:15.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.133 "assigned_rate_limits": { 00:17:15.133 "rw_ios_per_sec": 0, 00:17:15.133 "rw_mbytes_per_sec": 0, 00:17:15.133 "r_mbytes_per_sec": 0, 00:17:15.133 "w_mbytes_per_sec": 0 00:17:15.133 }, 00:17:15.133 "claimed": true, 00:17:15.133 "claim_type": "exclusive_write", 00:17:15.133 "zoned": false, 00:17:15.133 
"supported_io_types": { 00:17:15.133 "read": true, 00:17:15.133 "write": true, 00:17:15.133 "unmap": true, 00:17:15.133 "flush": true, 00:17:15.133 "reset": true, 00:17:15.133 "nvme_admin": false, 00:17:15.133 "nvme_io": false, 00:17:15.133 "nvme_io_md": false, 00:17:15.133 "write_zeroes": true, 00:17:15.133 "zcopy": true, 00:17:15.133 "get_zone_info": false, 00:17:15.133 "zone_management": false, 00:17:15.133 "zone_append": false, 00:17:15.133 "compare": false, 00:17:15.133 "compare_and_write": false, 00:17:15.133 "abort": true, 00:17:15.133 "seek_hole": false, 00:17:15.133 "seek_data": false, 00:17:15.133 "copy": true, 00:17:15.133 "nvme_iov_md": false 00:17:15.133 }, 00:17:15.133 "memory_domains": [ 00:17:15.133 { 00:17:15.133 "dma_device_id": "system", 00:17:15.133 "dma_device_type": 1 00:17:15.133 }, 00:17:15.133 { 00:17:15.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.133 "dma_device_type": 2 00:17:15.133 } 00:17:15.133 ], 00:17:15.133 "driver_specific": { 00:17:15.133 "passthru": { 00:17:15.133 "name": "pt1", 00:17:15.133 "base_bdev_name": "malloc1" 00:17:15.133 } 00:17:15.133 } 00:17:15.133 }' 00:17:15.133 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.133 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.391 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.391 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.391 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.391 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.391 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.392 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.392 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.392 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.650 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.650 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.650 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.650 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:15.650 08:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.909 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.909 "name": "pt2", 00:17:15.909 "aliases": [ 00:17:15.909 "00000000-0000-0000-0000-000000000002" 00:17:15.909 ], 00:17:15.909 "product_name": "passthru", 00:17:15.909 "block_size": 512, 00:17:15.909 "num_blocks": 65536, 00:17:15.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.909 "assigned_rate_limits": { 00:17:15.909 "rw_ios_per_sec": 0, 00:17:15.909 "rw_mbytes_per_sec": 0, 00:17:15.909 "r_mbytes_per_sec": 0, 00:17:15.909 "w_mbytes_per_sec": 0 00:17:15.909 }, 00:17:15.909 "claimed": true, 00:17:15.909 "claim_type": "exclusive_write", 00:17:15.909 "zoned": false, 00:17:15.909 "supported_io_types": { 00:17:15.909 "read": true, 00:17:15.909 "write": true, 00:17:15.909 "unmap": true, 00:17:15.909 "flush": true, 00:17:15.909 
"reset": true, 00:17:15.909 "nvme_admin": false, 00:17:15.909 "nvme_io": false, 00:17:15.909 "nvme_io_md": false, 00:17:15.909 "write_zeroes": true, 00:17:15.909 "zcopy": true, 00:17:15.909 "get_zone_info": false, 00:17:15.909 "zone_management": false, 00:17:15.909 "zone_append": false, 00:17:15.909 "compare": false, 00:17:15.909 "compare_and_write": false, 00:17:15.909 "abort": true, 00:17:15.909 "seek_hole": false, 00:17:15.909 "seek_data": false, 00:17:15.909 "copy": true, 00:17:15.909 "nvme_iov_md": false 00:17:15.909 }, 00:17:15.909 "memory_domains": [ 00:17:15.909 { 00:17:15.909 "dma_device_id": "system", 00:17:15.909 "dma_device_type": 1 00:17:15.909 }, 00:17:15.909 { 00:17:15.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.909 "dma_device_type": 2 00:17:15.909 } 00:17:15.909 ], 00:17:15.909 "driver_specific": { 00:17:15.909 "passthru": { 00:17:15.909 "name": "pt2", 00:17:15.909 "base_bdev_name": "malloc2" 00:17:15.909 } 00:17:15.909 } 00:17:15.909 }' 00:17:15.909 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.168 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:16.426 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:16.684 [2024-07-12 08:43:51.783089] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.684 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c078ca7d-f2cd-4db7-9866-488dce5304ed 00:17:16.684 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c078ca7d-f2cd-4db7-9866-488dce5304ed ']' 00:17:16.684 08:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:16.942 [2024-07-12 08:43:52.018827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.942 [2024-07-12 08:43:52.018888] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.942 [2024-07-12 08:43:52.019016] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.942 [2024-07-12 08:43:52.019081] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:16.942 [2024-07-12 08:43:52.019095] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:17:16.942 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:16.942 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.202 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:17.202 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:17.202 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.202 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:17.460 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.460 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:17.719 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:17.719 08:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:17.977 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:18.235 [2024-07-12 08:43:53.279102] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:18.235 [2024-07-12 08:43:53.281274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:18.235 [2024-07-12 08:43:53.281361] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:18.235 [2024-07-12 08:43:53.281457] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:18.235 [2024-07-12 08:43:53.281494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.235 [2024-07-12 08:43:53.281505] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:17:18.235 request: 00:17:18.235 { 00:17:18.235 "name": "raid_bdev1", 00:17:18.235 "raid_level": "concat", 00:17:18.235 "base_bdevs": [ 00:17:18.235 "malloc1", 00:17:18.235 "malloc2" 00:17:18.235 ], 00:17:18.235 "strip_size_kb": 64, 00:17:18.235 "superblock": false, 00:17:18.235 "method": "bdev_raid_create", 00:17:18.235 "req_id": 1 00:17:18.235 } 00:17:18.235 Got JSON-RPC error response 00:17:18.235 response: 00:17:18.235 { 00:17:18.235 "code": -17, 00:17:18.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:18.235 } 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.235 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:18.492 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:18.492 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:18.492 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.750 [2024-07-12 08:43:53.791142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.750 [2024-07-12 08:43:53.791247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.750 [2024-07-12 08:43:53.791284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:18.750 [2024-07-12 08:43:53.791313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.750 [2024-07-12 08:43:53.793851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.750 [2024-07-12 08:43:53.793919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.750 [2024-07-12 08:43:53.794048] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:18.750 [2024-07-12 08:43:53.794105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.750 pt1 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.750 08:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.009 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.009 "name": "raid_bdev1", 00:17:19.009 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:19.009 "strip_size_kb": 64, 00:17:19.009 "state": "configuring", 00:17:19.009 "raid_level": "concat", 00:17:19.009 "superblock": true, 00:17:19.009 "num_base_bdevs": 2, 00:17:19.009 "num_base_bdevs_discovered": 1, 00:17:19.009 "num_base_bdevs_operational": 2, 00:17:19.009 "base_bdevs_list": [ 00:17:19.009 { 00:17:19.009 "name": "pt1", 00:17:19.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.009 "is_configured": true, 00:17:19.009 "data_offset": 2048, 00:17:19.009 "data_size": 63488 00:17:19.009 }, 00:17:19.009 { 00:17:19.009 "name": null, 00:17:19.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.009 "is_configured": false, 00:17:19.009 "data_offset": 2048, 00:17:19.009 "data_size": 63488 00:17:19.009 } 00:17:19.009 ] 00:17:19.009 }' 00:17:19.009 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.009 08:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.576 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:19.576 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:19.576 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:19.576 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.834 [2024-07-12 08:43:54.959388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.834 [2024-07-12 08:43:54.959507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.834 [2024-07-12 08:43:54.959549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:19.834 [2024-07-12 08:43:54.959578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.834 [2024-07-12 
08:43:54.960134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.834 [2024-07-12 08:43:54.960196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.834 [2024-07-12 08:43:54.960324] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:19.834 [2024-07-12 08:43:54.960355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.834 [2024-07-12 08:43:54.960491] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:17:19.834 [2024-07-12 08:43:54.960506] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:19.834 [2024-07-12 08:43:54.960618] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:19.834 [2024-07-12 08:43:54.960981] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:17:19.834 [2024-07-12 08:43:54.961005] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:17:19.834 [2024-07-12 08:43:54.961147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.834 pt2 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.834 08:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.092 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.092 "name": "raid_bdev1", 00:17:20.092 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:20.092 "strip_size_kb": 64, 00:17:20.092 "state": "online", 00:17:20.092 "raid_level": "concat", 00:17:20.092 "superblock": true, 00:17:20.092 "num_base_bdevs": 2, 00:17:20.092 "num_base_bdevs_discovered": 2, 00:17:20.092 "num_base_bdevs_operational": 2, 00:17:20.092 "base_bdevs_list": [ 00:17:20.092 { 00:17:20.092 "name": "pt1", 00:17:20.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.092 "is_configured": true, 00:17:20.092 "data_offset": 2048, 00:17:20.092 
"data_size": 63488 00:17:20.092 }, 00:17:20.092 { 00:17:20.092 "name": "pt2", 00:17:20.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.092 "is_configured": true, 00:17:20.092 "data_offset": 2048, 00:17:20.092 "data_size": 63488 00:17:20.092 } 00:17:20.092 ] 00:17:20.092 }' 00:17:20.092 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.092 08:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:21.027 08:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:21.027 [2024-07-12 08:43:56.196886] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.027 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:21.027 "name": "raid_bdev1", 00:17:21.027 "aliases": [ 00:17:21.027 "c078ca7d-f2cd-4db7-9866-488dce5304ed" 00:17:21.027 ], 00:17:21.027 "product_name": "Raid Volume", 00:17:21.027 "block_size": 512, 00:17:21.027 "num_blocks": 126976, 00:17:21.027 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:21.027 "assigned_rate_limits": { 00:17:21.027 "rw_ios_per_sec": 0, 00:17:21.027 "rw_mbytes_per_sec": 0, 00:17:21.027 "r_mbytes_per_sec": 0, 00:17:21.027 "w_mbytes_per_sec": 0 00:17:21.027 }, 00:17:21.027 "claimed": false, 00:17:21.027 "zoned": false, 00:17:21.027 "supported_io_types": { 00:17:21.027 "read": true, 00:17:21.027 "write": true, 00:17:21.027 "unmap": true, 00:17:21.027 "flush": true, 00:17:21.027 "reset": true, 00:17:21.027 "nvme_admin": false, 00:17:21.027 "nvme_io": false, 00:17:21.027 "nvme_io_md": false, 00:17:21.027 "write_zeroes": true, 00:17:21.027 "zcopy": false, 00:17:21.027 "get_zone_info": false, 00:17:21.027 "zone_management": false, 00:17:21.027 "zone_append": false, 00:17:21.027 "compare": false, 00:17:21.027 "compare_and_write": false, 00:17:21.027 "abort": false, 00:17:21.027 "seek_hole": false, 00:17:21.027 "seek_data": false, 00:17:21.027 "copy": false, 00:17:21.027 "nvme_iov_md": false 00:17:21.027 }, 00:17:21.027 "memory_domains": [ 00:17:21.027 { 00:17:21.027 "dma_device_id": "system", 00:17:21.027 "dma_device_type": 1 00:17:21.027 }, 00:17:21.027 { 00:17:21.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.027 "dma_device_type": 2 00:17:21.027 }, 00:17:21.027 { 00:17:21.027 "dma_device_id": "system", 00:17:21.027 "dma_device_type": 1 00:17:21.027 }, 00:17:21.027 { 00:17:21.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.027 "dma_device_type": 2 00:17:21.027 } 00:17:21.027 ], 00:17:21.027 "driver_specific": { 00:17:21.027 "raid": { 00:17:21.027 "uuid": "c078ca7d-f2cd-4db7-9866-488dce5304ed", 00:17:21.027 "strip_size_kb": 64, 00:17:21.027 "state": 
"online", 00:17:21.027 "raid_level": "concat", 00:17:21.027 "superblock": true, 00:17:21.027 "num_base_bdevs": 2, 00:17:21.027 "num_base_bdevs_discovered": 2, 00:17:21.027 "num_base_bdevs_operational": 2, 00:17:21.027 "base_bdevs_list": [ 00:17:21.027 { 00:17:21.027 "name": "pt1", 00:17:21.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.027 "is_configured": true, 00:17:21.027 "data_offset": 2048, 00:17:21.027 "data_size": 63488 00:17:21.027 }, 00:17:21.027 { 00:17:21.027 "name": "pt2", 00:17:21.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.027 "is_configured": true, 00:17:21.027 "data_offset": 2048, 00:17:21.027 "data_size": 63488 00:17:21.027 } 00:17:21.027 ] 00:17:21.027 } 00:17:21.027 } 00:17:21.027 }' 00:17:21.028 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.346 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:21.346 pt2' 00:17:21.346 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.346 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:21.346 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.606 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.606 "name": "pt1", 00:17:21.606 "aliases": [ 00:17:21.606 "00000000-0000-0000-0000-000000000001" 00:17:21.606 ], 00:17:21.606 "product_name": "passthru", 00:17:21.606 "block_size": 512, 00:17:21.606 "num_blocks": 65536, 00:17:21.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.606 "assigned_rate_limits": { 00:17:21.606 "rw_ios_per_sec": 0, 00:17:21.606 "rw_mbytes_per_sec": 0, 00:17:21.606 "r_mbytes_per_sec": 0, 00:17:21.606 "w_mbytes_per_sec": 0 00:17:21.606 }, 00:17:21.606 "claimed": true, 00:17:21.606 "claim_type": "exclusive_write", 00:17:21.606 "zoned": false, 00:17:21.606 "supported_io_types": { 00:17:21.606 "read": true, 00:17:21.606 "write": true, 00:17:21.606 "unmap": true, 00:17:21.606 "flush": true, 00:17:21.606 "reset": true, 00:17:21.606 "nvme_admin": false, 00:17:21.606 "nvme_io": false, 00:17:21.606 "nvme_io_md": false, 00:17:21.606 "write_zeroes": true, 00:17:21.606 "zcopy": true, 00:17:21.606 "get_zone_info": false, 00:17:21.606 "zone_management": false, 00:17:21.606 "zone_append": false, 00:17:21.606 "compare": false, 00:17:21.606 "compare_and_write": false, 00:17:21.606 "abort": true, 00:17:21.606 "seek_hole": false, 00:17:21.606 "seek_data": false, 00:17:21.606 "copy": true, 00:17:21.606 "nvme_iov_md": false 00:17:21.606 }, 00:17:21.606 "memory_domains": [ 00:17:21.606 { 00:17:21.606 "dma_device_id": "system", 00:17:21.606 "dma_device_type": 1 00:17:21.606 }, 00:17:21.606 { 00:17:21.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.606 "dma_device_type": 2 00:17:21.606 } 00:17:21.606 ], 00:17:21.606 "driver_specific": { 00:17:21.606 "passthru": { 00:17:21.606 "name": "pt1", 00:17:21.606 "base_bdev_name": "malloc1" 00:17:21.606 } 00:17:21.606 } 00:17:21.606 }' 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.607 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.865 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:21.866 08:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.124 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.124 "name": "pt2", 00:17:22.124 "aliases": [ 00:17:22.124 "00000000-0000-0000-0000-000000000002" 00:17:22.124 ], 00:17:22.124 "product_name": "passthru", 00:17:22.124 "block_size": 512, 00:17:22.124 "num_blocks": 65536, 00:17:22.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.124 "assigned_rate_limits": { 00:17:22.124 "rw_ios_per_sec": 0, 00:17:22.124 "rw_mbytes_per_sec": 0, 00:17:22.124 "r_mbytes_per_sec": 0, 00:17:22.124 "w_mbytes_per_sec": 0 00:17:22.124 }, 00:17:22.124 "claimed": true, 00:17:22.124 "claim_type": "exclusive_write", 00:17:22.124 "zoned": false, 00:17:22.124 "supported_io_types": { 00:17:22.124 "read": true, 00:17:22.124 "write": true, 00:17:22.124 "unmap": true, 00:17:22.124 "flush": true, 00:17:22.124 "reset": true, 00:17:22.124 "nvme_admin": false, 00:17:22.124 "nvme_io": false, 00:17:22.124 "nvme_io_md": false, 00:17:22.124 "write_zeroes": true, 00:17:22.124 "zcopy": true, 00:17:22.124 "get_zone_info": false, 00:17:22.124 "zone_management": false, 00:17:22.124 "zone_append": false, 00:17:22.124 "compare": false, 00:17:22.124 "compare_and_write": false, 00:17:22.124 "abort": true, 00:17:22.124 "seek_hole": false, 00:17:22.124 "seek_data": false, 00:17:22.124 "copy": true, 00:17:22.124 "nvme_iov_md": false 00:17:22.124 }, 00:17:22.124 "memory_domains": [ 00:17:22.124 { 00:17:22.124 "dma_device_id": "system", 00:17:22.124 "dma_device_type": 1 00:17:22.124 }, 00:17:22.124 { 00:17:22.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.124 "dma_device_type": 2 00:17:22.124 } 00:17:22.124 ], 00:17:22.124 "driver_specific": { 00:17:22.124 "passthru": { 00:17:22.124 "name": "pt2", 00:17:22.124 "base_bdev_name": "malloc2" 00:17:22.124 } 00:17:22.124 } 00:17:22.124 }' 00:17:22.124 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.382 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.640 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:22.898 [2024-07-12 08:43:58.049339] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.898 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c078ca7d-f2cd-4db7-9866-488dce5304ed '!=' c078ca7d-f2cd-4db7-9866-488dce5304ed ']' 00:17:22.898 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:22.898 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:22.898 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 123296 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 123296 ']' 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 123296 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123296 00:17:22.899 killing process with pid 123296 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123296' 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 123296 00:17:22.899 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 123296 00:17:22.899 [2024-07-12 08:43:58.084754] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.899 [2024-07-12 08:43:58.084890] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.899 [2024-07-12 08:43:58.084967] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.899 [2024-07-12 08:43:58.084996] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:17:23.157 [2024-07-12 08:43:58.300906] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.529 ************************************ 
00:17:24.529 END TEST raid_superblock_test 00:17:24.529 ************************************ 00:17:24.529 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:24.529 00:17:24.529 real 0m12.982s 00:17:24.529 user 0m23.230s 00:17:24.529 sys 0m1.439s 00:17:24.529 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.529 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 08:43:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:24.529 08:43:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:17:24.529 08:43:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:24.529 08:43:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.529 08:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 ************************************ 00:17:24.529 START TEST raid_read_error_test 00:17:24.529 ************************************ 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:24.529 08:43:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.nORFF6CUYJ 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=123697 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 123697 /var/tmp/spdk-raid.sock 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 123697 ']' 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.529 08:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.529 [2024-07-12 08:43:59.516505] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:17:24.529 [2024-07-12 08:43:59.516682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123697 ] 00:17:24.529 [2024-07-12 08:43:59.676429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.787 [2024-07-12 08:43:59.897971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.045 [2024-07-12 08:44:00.096771] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.302 08:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.302 08:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:25.302 08:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:25.302 08:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.578 BaseBdev1_malloc 00:17:25.578 08:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:25.836 true 00:17:25.836 08:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:26.096 [2024-07-12 08:44:01.194721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:26.096 [2024-07-12 08:44:01.194870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.096 [2024-07-12 08:44:01.194921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:26.096 [2024-07-12 08:44:01.194946] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.096 [2024-07-12 08:44:01.197757] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.096 [2024-07-12 08:44:01.197827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.096 BaseBdev1 00:17:26.096 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:26.096 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.353 BaseBdev2_malloc 00:17:26.353 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:26.611 true 00:17:26.611 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:26.868 [2024-07-12 08:44:02.006313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:26.868 [2024-07-12 08:44:02.006442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.868 [2024-07-12 08:44:02.006495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:26.868 [2024-07-12 08:44:02.006521] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.868 [2024-07-12 08:44:02.009175] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.868 [2024-07-12 08:44:02.009242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.868 BaseBdev2 00:17:26.868 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:27.126 [2024-07-12 08:44:02.250405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.126 [2024-07-12 08:44:02.252709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.126 [2024-07-12 08:44:02.252981] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:27.126 [2024-07-12 08:44:02.253007] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:27.126 [2024-07-12 08:44:02.253158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:27.126 [2024-07-12 08:44:02.253586] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:27.126 [2024-07-12 08:44:02.253613] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:27.126 [2024-07-12 08:44:02.253787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.126 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.384 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.384 "name": "raid_bdev1", 00:17:27.384 "uuid": "6c7dd51b-9016-48c0-a99f-f876bfc1a252", 00:17:27.384 "strip_size_kb": 64, 00:17:27.384 "state": "online", 00:17:27.384 "raid_level": "concat", 00:17:27.384 "superblock": true, 00:17:27.384 "num_base_bdevs": 2, 00:17:27.384 "num_base_bdevs_discovered": 2, 00:17:27.384 "num_base_bdevs_operational": 2, 00:17:27.384 "base_bdevs_list": [ 00:17:27.384 { 00:17:27.384 "name": "BaseBdev1", 00:17:27.384 "uuid": "d2d65349-cd0b-51c1-9929-b6a61357a5a5", 00:17:27.384 "is_configured": true, 00:17:27.384 "data_offset": 2048, 00:17:27.384 "data_size": 63488 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "name": "BaseBdev2", 00:17:27.384 "uuid": "a48b7d6d-21e0-56be-b0f6-da8920a2802d", 00:17:27.384 "is_configured": true, 00:17:27.384 "data_offset": 2048, 00:17:27.384 "data_size": 63488 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }' 00:17:27.384 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.384 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.325 08:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:28.325 08:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:28.325 [2024-07-12 08:44:03.247877] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.258 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.516 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.774 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.774 "name": "raid_bdev1", 00:17:29.774 "uuid": "6c7dd51b-9016-48c0-a99f-f876bfc1a252", 00:17:29.774 "strip_size_kb": 64, 00:17:29.774 "state": "online", 00:17:29.774 "raid_level": "concat", 00:17:29.774 "superblock": true, 00:17:29.774 "num_base_bdevs": 2, 00:17:29.774 "num_base_bdevs_discovered": 2, 00:17:29.774 "num_base_bdevs_operational": 2, 00:17:29.774 "base_bdevs_list": [ 00:17:29.774 { 00:17:29.774 "name": "BaseBdev1", 00:17:29.774 "uuid": "d2d65349-cd0b-51c1-9929-b6a61357a5a5", 00:17:29.774 "is_configured": true, 00:17:29.774 "data_offset": 2048, 00:17:29.774 "data_size": 63488 00:17:29.774 }, 00:17:29.774 { 00:17:29.774 "name": "BaseBdev2", 00:17:29.774 "uuid": "a48b7d6d-21e0-56be-b0f6-da8920a2802d", 00:17:29.774 "is_configured": true, 00:17:29.774 "data_offset": 2048, 00:17:29.774 "data_size": 63488 00:17:29.774 } 00:17:29.774 ] 00:17:29.774 }' 00:17:29.774 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.774 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.340 08:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:30.598 [2024-07-12 08:44:05.648856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.598 [2024-07-12 08:44:05.648911] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.598 [2024-07-12 08:44:05.652006] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.598 [2024-07-12 08:44:05.652064] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.598 [2024-07-12 08:44:05.652105] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.598 [2024-07-12 08:44:05.652116] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:30.598 0 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 123697 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 123697 ']' 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 123697 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:30.598 
08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123697 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123697' 00:17:30.598 killing process with pid 123697 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 123697 00:17:30.598 08:44:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 123697 00:17:30.598 [2024-07-12 08:44:05.686338] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.856 [2024-07-12 08:44:05.796563] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.nORFF6CUYJ 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:32.230 ************************************ 00:17:32.230 END TEST raid_read_error_test 00:17:32.230 ************************************ 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:17:32.230 00:17:32.230 real 0m7.541s 00:17:32.230 user 0m11.559s 00:17:32.230 sys 0m0.733s 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.230 08:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.230 08:44:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:32.230 08:44:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:17:32.230 08:44:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:32.230 08:44:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.230 08:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.230 ************************************ 00:17:32.230 START TEST raid_write_error_test 00:17:32.230 ************************************ 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:32.230 08:44:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hts3tslaKA 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=123908 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 123908 /var/tmp/spdk-raid.sock 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 123908 ']' 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.230 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.230 [2024-07-12 08:44:07.121135] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:17:32.230 [2024-07-12 08:44:07.121347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123908 ] 00:17:32.230 [2024-07-12 08:44:07.286563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.488 [2024-07-12 08:44:07.500622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.745 [2024-07-12 08:44:07.698796] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.003 08:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:33.003 08:44:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:33.003 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:33.003 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:33.261 BaseBdev1_malloc 00:17:33.261 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:33.519 true 00:17:33.519 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:33.777 [2024-07-12 08:44:08.832167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:33.777 [2024-07-12 08:44:08.832524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.777 [2024-07-12 08:44:08.832738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.777 [2024-07-12 08:44:08.832873] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.777 [2024-07-12 08:44:08.835594] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.777 [2024-07-12 08:44:08.835765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:33.777 BaseBdev1 00:17:33.777 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:33.777 08:44:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:34.035 BaseBdev2_malloc 00:17:34.035 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:34.294 true 00:17:34.294 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:34.552 [2024-07-12 08:44:09.652767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:34.552 [2024-07-12 08:44:09.653099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.552 [2024-07-12 08:44:09.653263] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:34.552 [2024-07-12 
08:44:09.653386] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.552 [2024-07-12 08:44:09.656111] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.552 [2024-07-12 08:44:09.656301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:34.552 BaseBdev2 00:17:34.552 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:34.811 [2024-07-12 08:44:09.892899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.811 [2024-07-12 08:44:09.895299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.811 [2024-07-12 08:44:09.895716] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:34.811 [2024-07-12 08:44:09.895847] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:34.812 [2024-07-12 08:44:09.896036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:34.812 [2024-07-12 08:44:09.896528] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:34.812 [2024-07-12 08:44:09.896659] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:34.812 [2024-07-12 08:44:09.896996] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.812 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.071 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.071 "name": "raid_bdev1", 00:17:35.071 "uuid": "dc0e6057-f154-480f-b563-af332981ba06", 00:17:35.071 "strip_size_kb": 64, 00:17:35.071 "state": "online", 00:17:35.071 "raid_level": "concat", 00:17:35.071 "superblock": true, 00:17:35.071 "num_base_bdevs": 2, 00:17:35.071 "num_base_bdevs_discovered": 2, 00:17:35.071 "num_base_bdevs_operational": 2, 00:17:35.071 "base_bdevs_list": [ 00:17:35.071 { 
00:17:35.071 "name": "BaseBdev1", 00:17:35.071 "uuid": "ef0b81c7-033b-53de-adfb-3f13d802f69a", 00:17:35.071 "is_configured": true, 00:17:35.071 "data_offset": 2048, 00:17:35.071 "data_size": 63488 00:17:35.071 }, 00:17:35.071 { 00:17:35.071 "name": "BaseBdev2", 00:17:35.071 "uuid": "9beb4d7b-75ea-539f-9134-dc7972d142da", 00:17:35.071 "is_configured": true, 00:17:35.071 "data_offset": 2048, 00:17:35.071 "data_size": 63488 00:17:35.071 } 00:17:35.071 ] 00:17:35.071 }' 00:17:35.071 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.071 08:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.637 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:35.637 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:35.896 [2024-07-12 08:44:10.890511] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:36.831 08:44:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.090 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.350 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.350 "name": "raid_bdev1", 00:17:37.350 "uuid": "dc0e6057-f154-480f-b563-af332981ba06", 00:17:37.350 "strip_size_kb": 64, 00:17:37.350 "state": "online", 00:17:37.350 "raid_level": "concat", 00:17:37.350 "superblock": true, 00:17:37.350 "num_base_bdevs": 2, 00:17:37.350 "num_base_bdevs_discovered": 2, 00:17:37.350 "num_base_bdevs_operational": 2, 00:17:37.350 "base_bdevs_list": [ 00:17:37.350 { 
00:17:37.350 "name": "BaseBdev1", 00:17:37.350 "uuid": "ef0b81c7-033b-53de-adfb-3f13d802f69a", 00:17:37.350 "is_configured": true, 00:17:37.350 "data_offset": 2048, 00:17:37.350 "data_size": 63488 00:17:37.350 }, 00:17:37.350 { 00:17:37.350 "name": "BaseBdev2", 00:17:37.350 "uuid": "9beb4d7b-75ea-539f-9134-dc7972d142da", 00:17:37.350 "is_configured": true, 00:17:37.350 "data_offset": 2048, 00:17:37.350 "data_size": 63488 00:17:37.350 } 00:17:37.350 ] 00:17:37.350 }' 00:17:37.350 08:44:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.350 08:44:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.986 08:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:38.244 [2024-07-12 08:44:13.306941] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.244 [2024-07-12 08:44:13.307183] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.244 [2024-07-12 08:44:13.310408] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.244 [2024-07-12 08:44:13.310582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.244 [2024-07-12 08:44:13.310659] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.244 [2024-07-12 08:44:13.310897] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:38.244 0 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 123908 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 123908 ']' 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 123908 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123908 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123908' 00:17:38.244 killing process with pid 123908 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 123908 00:17:38.244 08:44:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 123908 00:17:38.244 [2024-07-12 08:44:13.346414] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.502 [2024-07-12 08:44:13.456870] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hts3tslaKA 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 
00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:39.876 ************************************ 00:17:39.876 END TEST raid_write_error_test 00:17:39.876 ************************************ 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:17:39.876 00:17:39.876 real 0m7.598s 00:17:39.876 user 0m11.640s 00:17:39.876 sys 0m0.741s 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.876 08:44:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.876 08:44:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:39.876 08:44:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:39.876 08:44:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:39.876 08:44:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:39.876 08:44:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.876 08:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.876 ************************************ 00:17:39.876 START TEST raid_state_function_test 00:17:39.876 ************************************ 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124124 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124124' 00:17:39.876 Process raid pid: 124124 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124124 /var/tmp/spdk-raid.sock 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 124124 ']' 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:39.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.876 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.876 [2024-07-12 08:44:14.756930] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:17:39.876 [2024-07-12 08:44:14.757310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.876 [2024-07-12 08:44:14.919331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.135 [2024-07-12 08:44:15.138558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.394 [2024-07-12 08:44:15.340793] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.652 08:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.652 08:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:40.652 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:40.911 [2024-07-12 08:44:15.944075] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.911 [2024-07-12 08:44:15.944378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.911 [2024-07-12 08:44:15.944503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.911 [2024-07-12 08:44:15.944626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.911 08:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.169 08:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.169 "name": "Existed_Raid", 00:17:41.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.169 "strip_size_kb": 0, 00:17:41.169 "state": "configuring", 00:17:41.169 "raid_level": "raid1", 00:17:41.169 "superblock": false, 00:17:41.169 "num_base_bdevs": 2, 00:17:41.169 "num_base_bdevs_discovered": 0, 00:17:41.169 "num_base_bdevs_operational": 2, 00:17:41.169 "base_bdevs_list": [ 
00:17:41.169 { 00:17:41.169 "name": "BaseBdev1", 00:17:41.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.169 "is_configured": false, 00:17:41.169 "data_offset": 0, 00:17:41.169 "data_size": 0 00:17:41.169 }, 00:17:41.169 { 00:17:41.169 "name": "BaseBdev2", 00:17:41.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.169 "is_configured": false, 00:17:41.169 "data_offset": 0, 00:17:41.169 "data_size": 0 00:17:41.169 } 00:17:41.169 ] 00:17:41.169 }' 00:17:41.169 08:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.169 08:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.131 08:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.131 [2024-07-12 08:44:17.196215] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.131 [2024-07-12 08:44:17.196470] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:42.131 08:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:42.389 [2024-07-12 08:44:17.444280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.389 [2024-07-12 08:44:17.444545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.389 [2024-07-12 08:44:17.444673] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.389 [2024-07-12 08:44:17.444738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.389 08:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.646 [2024-07-12 08:44:17.715851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.646 BaseBdev1 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.646 08:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.904 08:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.162 [ 00:17:43.162 { 00:17:43.162 "name": "BaseBdev1", 00:17:43.162 "aliases": [ 00:17:43.162 "f084fe89-0353-41d1-b3d2-bb61bbf1d173" 00:17:43.162 ], 00:17:43.162 "product_name": "Malloc disk", 00:17:43.162 "block_size": 512, 00:17:43.162 "num_blocks": 
65536, 00:17:43.162 "uuid": "f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:43.162 "assigned_rate_limits": { 00:17:43.162 "rw_ios_per_sec": 0, 00:17:43.162 "rw_mbytes_per_sec": 0, 00:17:43.162 "r_mbytes_per_sec": 0, 00:17:43.162 "w_mbytes_per_sec": 0 00:17:43.162 }, 00:17:43.162 "claimed": true, 00:17:43.162 "claim_type": "exclusive_write", 00:17:43.162 "zoned": false, 00:17:43.162 "supported_io_types": { 00:17:43.162 "read": true, 00:17:43.162 "write": true, 00:17:43.162 "unmap": true, 00:17:43.162 "flush": true, 00:17:43.162 "reset": true, 00:17:43.162 "nvme_admin": false, 00:17:43.162 "nvme_io": false, 00:17:43.162 "nvme_io_md": false, 00:17:43.162 "write_zeroes": true, 00:17:43.162 "zcopy": true, 00:17:43.162 "get_zone_info": false, 00:17:43.162 "zone_management": false, 00:17:43.162 "zone_append": false, 00:17:43.162 "compare": false, 00:17:43.162 "compare_and_write": false, 00:17:43.162 "abort": true, 00:17:43.162 "seek_hole": false, 00:17:43.162 "seek_data": false, 00:17:43.162 "copy": true, 00:17:43.162 "nvme_iov_md": false 00:17:43.162 }, 00:17:43.162 "memory_domains": [ 00:17:43.162 { 00:17:43.162 "dma_device_id": "system", 00:17:43.162 "dma_device_type": 1 00:17:43.162 }, 00:17:43.162 { 00:17:43.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.162 "dma_device_type": 2 00:17:43.162 } 00:17:43.162 ], 00:17:43.162 "driver_specific": {} 00:17:43.162 } 00:17:43.162 ] 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.162 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.421 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.421 "name": "Existed_Raid", 00:17:43.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.421 "strip_size_kb": 0, 00:17:43.421 "state": "configuring", 00:17:43.421 "raid_level": "raid1", 00:17:43.421 "superblock": false, 00:17:43.421 "num_base_bdevs": 2, 00:17:43.421 "num_base_bdevs_discovered": 1, 00:17:43.421 "num_base_bdevs_operational": 2, 00:17:43.421 "base_bdevs_list": [ 00:17:43.421 { 00:17:43.421 "name": "BaseBdev1", 00:17:43.421 "uuid": 
"f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:43.421 "is_configured": true, 00:17:43.421 "data_offset": 0, 00:17:43.421 "data_size": 65536 00:17:43.421 }, 00:17:43.421 { 00:17:43.421 "name": "BaseBdev2", 00:17:43.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.421 "is_configured": false, 00:17:43.421 "data_offset": 0, 00:17:43.421 "data_size": 0 00:17:43.421 } 00:17:43.421 ] 00:17:43.421 }' 00:17:43.421 08:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.421 08:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.354 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.354 [2024-07-12 08:44:19.480347] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.354 [2024-07-12 08:44:19.480595] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:44.354 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:44.612 [2024-07-12 08:44:19.708419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.612 [2024-07-12 08:44:19.710748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.612 [2024-07-12 08:44:19.710941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.612 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.871 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.871 "name": "Existed_Raid", 00:17:44.871 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:44.871 "strip_size_kb": 0, 00:17:44.871 "state": "configuring", 00:17:44.871 "raid_level": "raid1", 00:17:44.871 "superblock": false, 00:17:44.871 "num_base_bdevs": 2, 00:17:44.871 "num_base_bdevs_discovered": 1, 00:17:44.871 "num_base_bdevs_operational": 2, 00:17:44.871 "base_bdevs_list": [ 00:17:44.871 { 00:17:44.871 "name": "BaseBdev1", 00:17:44.871 "uuid": "f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:44.871 "is_configured": true, 00:17:44.871 "data_offset": 0, 00:17:44.871 "data_size": 65536 00:17:44.871 }, 00:17:44.871 { 00:17:44.871 "name": "BaseBdev2", 00:17:44.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.871 "is_configured": false, 00:17:44.871 "data_offset": 0, 00:17:44.871 "data_size": 0 00:17:44.871 } 00:17:44.871 ] 00:17:44.871 }' 00:17:44.871 08:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.871 08:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.805 [2024-07-12 08:44:20.963576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.805 [2024-07-12 08:44:20.963893] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:45.805 [2024-07-12 08:44:20.963940] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:45.805 [2024-07-12 08:44:20.964189] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:45.805 [2024-07-12 08:44:20.964741] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:45.805 [2024-07-12 08:44:20.964863] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:45.805 [2024-07-12 08:44:20.965270] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.805 BaseBdev2 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.805 08:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.063 08:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.320 [ 00:17:46.320 { 00:17:46.320 "name": "BaseBdev2", 00:17:46.320 "aliases": [ 00:17:46.320 "e5a66591-3329-4488-b20d-fc2f04948424" 00:17:46.320 ], 00:17:46.320 "product_name": "Malloc disk", 00:17:46.320 "block_size": 512, 00:17:46.320 "num_blocks": 65536, 00:17:46.320 "uuid": "e5a66591-3329-4488-b20d-fc2f04948424", 00:17:46.320 
"assigned_rate_limits": { 00:17:46.320 "rw_ios_per_sec": 0, 00:17:46.320 "rw_mbytes_per_sec": 0, 00:17:46.320 "r_mbytes_per_sec": 0, 00:17:46.320 "w_mbytes_per_sec": 0 00:17:46.320 }, 00:17:46.320 "claimed": true, 00:17:46.320 "claim_type": "exclusive_write", 00:17:46.320 "zoned": false, 00:17:46.320 "supported_io_types": { 00:17:46.320 "read": true, 00:17:46.320 "write": true, 00:17:46.320 "unmap": true, 00:17:46.320 "flush": true, 00:17:46.320 "reset": true, 00:17:46.320 "nvme_admin": false, 00:17:46.320 "nvme_io": false, 00:17:46.320 "nvme_io_md": false, 00:17:46.320 "write_zeroes": true, 00:17:46.320 "zcopy": true, 00:17:46.320 "get_zone_info": false, 00:17:46.320 "zone_management": false, 00:17:46.320 "zone_append": false, 00:17:46.320 "compare": false, 00:17:46.320 "compare_and_write": false, 00:17:46.320 "abort": true, 00:17:46.320 "seek_hole": false, 00:17:46.320 "seek_data": false, 00:17:46.320 "copy": true, 00:17:46.320 "nvme_iov_md": false 00:17:46.320 }, 00:17:46.320 "memory_domains": [ 00:17:46.320 { 00:17:46.320 "dma_device_id": "system", 00:17:46.320 "dma_device_type": 1 00:17:46.320 }, 00:17:46.320 { 00:17:46.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.320 "dma_device_type": 2 00:17:46.320 } 00:17:46.320 ], 00:17:46.320 "driver_specific": {} 00:17:46.320 } 00:17:46.320 ] 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.320 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.578 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.578 "name": "Existed_Raid", 00:17:46.578 "uuid": "8b4b2a46-fada-4ce3-aeb4-311a5d9a7177", 00:17:46.578 "strip_size_kb": 0, 00:17:46.578 "state": "online", 00:17:46.578 "raid_level": "raid1", 00:17:46.578 "superblock": false, 00:17:46.578 "num_base_bdevs": 2, 00:17:46.578 "num_base_bdevs_discovered": 2, 00:17:46.578 "num_base_bdevs_operational": 
2, 00:17:46.578 "base_bdevs_list": [ 00:17:46.578 { 00:17:46.578 "name": "BaseBdev1", 00:17:46.578 "uuid": "f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:46.578 "is_configured": true, 00:17:46.578 "data_offset": 0, 00:17:46.578 "data_size": 65536 00:17:46.578 }, 00:17:46.578 { 00:17:46.578 "name": "BaseBdev2", 00:17:46.578 "uuid": "e5a66591-3329-4488-b20d-fc2f04948424", 00:17:46.578 "is_configured": true, 00:17:46.578 "data_offset": 0, 00:17:46.578 "data_size": 65536 00:17:46.578 } 00:17:46.578 ] 00:17:46.578 }' 00:17:46.578 08:44:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.578 08:44:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:47.511 [2024-07-12 08:44:22.676326] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.511 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:47.511 "name": "Existed_Raid", 00:17:47.511 "aliases": [ 00:17:47.511 "8b4b2a46-fada-4ce3-aeb4-311a5d9a7177" 00:17:47.511 ], 00:17:47.511 "product_name": "Raid Volume", 00:17:47.511 "block_size": 512, 00:17:47.511 "num_blocks": 65536, 00:17:47.511 "uuid": "8b4b2a46-fada-4ce3-aeb4-311a5d9a7177", 00:17:47.511 "assigned_rate_limits": { 00:17:47.511 "rw_ios_per_sec": 0, 00:17:47.511 "rw_mbytes_per_sec": 0, 00:17:47.511 "r_mbytes_per_sec": 0, 00:17:47.511 "w_mbytes_per_sec": 0 00:17:47.511 }, 00:17:47.511 "claimed": false, 00:17:47.511 "zoned": false, 00:17:47.511 "supported_io_types": { 00:17:47.511 "read": true, 00:17:47.511 "write": true, 00:17:47.511 "unmap": false, 00:17:47.511 "flush": false, 00:17:47.511 "reset": true, 00:17:47.511 "nvme_admin": false, 00:17:47.511 "nvme_io": false, 00:17:47.511 "nvme_io_md": false, 00:17:47.511 "write_zeroes": true, 00:17:47.511 "zcopy": false, 00:17:47.512 "get_zone_info": false, 00:17:47.512 "zone_management": false, 00:17:47.512 "zone_append": false, 00:17:47.512 "compare": false, 00:17:47.512 "compare_and_write": false, 00:17:47.512 "abort": false, 00:17:47.512 "seek_hole": false, 00:17:47.512 "seek_data": false, 00:17:47.512 "copy": false, 00:17:47.512 "nvme_iov_md": false 00:17:47.512 }, 00:17:47.512 "memory_domains": [ 00:17:47.512 { 00:17:47.512 "dma_device_id": "system", 00:17:47.512 "dma_device_type": 1 00:17:47.512 }, 00:17:47.512 { 00:17:47.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.512 "dma_device_type": 2 00:17:47.512 }, 00:17:47.512 { 00:17:47.512 "dma_device_id": "system", 00:17:47.512 "dma_device_type": 1 00:17:47.512 }, 00:17:47.512 { 00:17:47.512 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.512 "dma_device_type": 2 00:17:47.512 } 00:17:47.512 ], 00:17:47.512 "driver_specific": { 00:17:47.512 "raid": { 00:17:47.512 "uuid": "8b4b2a46-fada-4ce3-aeb4-311a5d9a7177", 00:17:47.512 "strip_size_kb": 0, 00:17:47.512 "state": "online", 00:17:47.512 "raid_level": "raid1", 00:17:47.512 "superblock": false, 00:17:47.512 "num_base_bdevs": 2, 00:17:47.512 "num_base_bdevs_discovered": 2, 00:17:47.512 "num_base_bdevs_operational": 2, 00:17:47.512 "base_bdevs_list": [ 00:17:47.512 { 00:17:47.512 "name": "BaseBdev1", 00:17:47.512 "uuid": "f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:47.512 "is_configured": true, 00:17:47.512 "data_offset": 0, 00:17:47.512 "data_size": 65536 00:17:47.512 }, 00:17:47.512 { 00:17:47.512 "name": "BaseBdev2", 00:17:47.512 "uuid": "e5a66591-3329-4488-b20d-fc2f04948424", 00:17:47.512 "is_configured": true, 00:17:47.512 "data_offset": 0, 00:17:47.512 "data_size": 65536 00:17:47.512 } 00:17:47.512 ] 00:17:47.512 } 00:17:47.512 } 00:17:47.512 }' 00:17:47.512 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.770 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:47.770 BaseBdev2' 00:17:47.770 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:47.770 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:47.770 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.026 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.026 "name": "BaseBdev1", 00:17:48.026 "aliases": [ 00:17:48.026 "f084fe89-0353-41d1-b3d2-bb61bbf1d173" 00:17:48.026 ], 00:17:48.026 "product_name": "Malloc disk", 00:17:48.026 "block_size": 512, 00:17:48.026 "num_blocks": 65536, 00:17:48.026 "uuid": "f084fe89-0353-41d1-b3d2-bb61bbf1d173", 00:17:48.026 "assigned_rate_limits": { 00:17:48.026 "rw_ios_per_sec": 0, 00:17:48.026 "rw_mbytes_per_sec": 0, 00:17:48.026 "r_mbytes_per_sec": 0, 00:17:48.026 "w_mbytes_per_sec": 0 00:17:48.026 }, 00:17:48.026 "claimed": true, 00:17:48.026 "claim_type": "exclusive_write", 00:17:48.026 "zoned": false, 00:17:48.026 "supported_io_types": { 00:17:48.026 "read": true, 00:17:48.026 "write": true, 00:17:48.026 "unmap": true, 00:17:48.026 "flush": true, 00:17:48.026 "reset": true, 00:17:48.026 "nvme_admin": false, 00:17:48.026 "nvme_io": false, 00:17:48.026 "nvme_io_md": false, 00:17:48.026 "write_zeroes": true, 00:17:48.026 "zcopy": true, 00:17:48.026 "get_zone_info": false, 00:17:48.026 "zone_management": false, 00:17:48.026 "zone_append": false, 00:17:48.026 "compare": false, 00:17:48.026 "compare_and_write": false, 00:17:48.026 "abort": true, 00:17:48.026 "seek_hole": false, 00:17:48.026 "seek_data": false, 00:17:48.026 "copy": true, 00:17:48.026 "nvme_iov_md": false 00:17:48.026 }, 00:17:48.026 "memory_domains": [ 00:17:48.026 { 00:17:48.026 "dma_device_id": "system", 00:17:48.026 "dma_device_type": 1 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.026 "dma_device_type": 2 00:17:48.026 } 00:17:48.026 ], 00:17:48.026 "driver_specific": {} 00:17:48.026 }' 00:17:48.026 08:44:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.026 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:48.283 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.540 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.540 "name": "BaseBdev2", 00:17:48.540 "aliases": [ 00:17:48.540 "e5a66591-3329-4488-b20d-fc2f04948424" 00:17:48.540 ], 00:17:48.540 "product_name": "Malloc disk", 00:17:48.540 "block_size": 512, 00:17:48.540 "num_blocks": 65536, 00:17:48.540 "uuid": "e5a66591-3329-4488-b20d-fc2f04948424", 00:17:48.540 "assigned_rate_limits": { 00:17:48.540 "rw_ios_per_sec": 0, 00:17:48.540 "rw_mbytes_per_sec": 0, 00:17:48.540 "r_mbytes_per_sec": 0, 00:17:48.540 "w_mbytes_per_sec": 0 00:17:48.540 }, 00:17:48.540 "claimed": true, 00:17:48.540 "claim_type": "exclusive_write", 00:17:48.540 "zoned": false, 00:17:48.540 "supported_io_types": { 00:17:48.540 "read": true, 00:17:48.540 "write": true, 00:17:48.540 "unmap": true, 00:17:48.540 "flush": true, 00:17:48.540 "reset": true, 00:17:48.540 "nvme_admin": false, 00:17:48.540 "nvme_io": false, 00:17:48.540 "nvme_io_md": false, 00:17:48.540 "write_zeroes": true, 00:17:48.540 "zcopy": true, 00:17:48.540 "get_zone_info": false, 00:17:48.540 "zone_management": false, 00:17:48.540 "zone_append": false, 00:17:48.540 "compare": false, 00:17:48.540 "compare_and_write": false, 00:17:48.540 "abort": true, 00:17:48.540 "seek_hole": false, 00:17:48.540 "seek_data": false, 00:17:48.540 "copy": true, 00:17:48.540 "nvme_iov_md": false 00:17:48.540 }, 00:17:48.540 "memory_domains": [ 00:17:48.540 { 00:17:48.540 "dma_device_id": "system", 00:17:48.540 "dma_device_type": 1 00:17:48.540 }, 00:17:48.540 { 00:17:48.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.540 "dma_device_type": 2 00:17:48.540 } 00:17:48.540 ], 00:17:48.540 "driver_specific": {} 00:17:48.540 }' 00:17:48.540 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.540 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.798 08:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.056 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.056 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.056 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.056 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.056 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:49.314 [2024-07-12 08:44:24.388504] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:49.314 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.315 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.315 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.315 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.315 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.315 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.881 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.881 "name": "Existed_Raid", 00:17:49.881 "uuid": "8b4b2a46-fada-4ce3-aeb4-311a5d9a7177", 00:17:49.881 "strip_size_kb": 0, 00:17:49.881 "state": "online", 00:17:49.881 "raid_level": "raid1", 00:17:49.881 "superblock": false, 
00:17:49.881 "num_base_bdevs": 2, 00:17:49.881 "num_base_bdevs_discovered": 1, 00:17:49.881 "num_base_bdevs_operational": 1, 00:17:49.881 "base_bdevs_list": [ 00:17:49.881 { 00:17:49.881 "name": null, 00:17:49.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.881 "is_configured": false, 00:17:49.881 "data_offset": 0, 00:17:49.881 "data_size": 65536 00:17:49.881 }, 00:17:49.881 { 00:17:49.881 "name": "BaseBdev2", 00:17:49.881 "uuid": "e5a66591-3329-4488-b20d-fc2f04948424", 00:17:49.881 "is_configured": true, 00:17:49.881 "data_offset": 0, 00:17:49.881 "data_size": 65536 00:17:49.881 } 00:17:49.881 ] 00:17:49.881 }' 00:17:49.881 08:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.881 08:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.446 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:50.446 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:50.446 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.447 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:50.703 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:50.703 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.703 08:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.960 [2024-07-12 08:44:25.961910] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.960 [2024-07-12 08:44:25.962287] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.960 [2024-07-12 08:44:26.047057] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.960 [2024-07-12 08:44:26.047371] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.960 [2024-07-12 08:44:26.047479] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:50.960 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:50.960 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:50.960 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.960 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124124 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 124124 ']' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 124124 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124124 00:17:51.217 killing process with pid 124124 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124124' 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 124124 00:17:51.217 08:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 124124 00:17:51.217 [2024-07-12 08:44:26.345702] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.217 [2024-07-12 08:44:26.345823] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.632 ************************************ 00:17:52.632 END TEST raid_state_function_test 00:17:52.632 ************************************ 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:52.632 00:17:52.632 real 0m12.779s 00:17:52.632 user 0m22.787s 00:17:52.632 sys 0m1.394s 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.632 08:44:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:52.632 08:44:27 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:17:52.632 08:44:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:52.632 08:44:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.632 08:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.632 ************************************ 00:17:52.632 START TEST raid_state_function_test_sb 00:17:52.632 ************************************ 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=124544 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124544' 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:52.632 Process raid pid: 124544 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 124544 /var/tmp/spdk-raid.sock 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 124544 ']' 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.632 08:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.632 [2024-07-12 08:44:27.594005] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:17:52.632 [2024-07-12 08:44:27.594321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.632 [2024-07-12 08:44:27.753543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.891 [2024-07-12 08:44:27.978667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.149 [2024-07-12 08:44:28.178961] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.406 08:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.406 08:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:53.407 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:53.971 [2024-07-12 08:44:28.861413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.971 [2024-07-12 08:44:28.861790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.971 [2024-07-12 08:44:28.861906] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.971 [2024-07-12 08:44:28.861974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.971 08:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.971 08:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.971 "name": "Existed_Raid", 00:17:53.971 "uuid": "6615e43c-8dac-48de-94dd-d64951137cd9", 00:17:53.971 "strip_size_kb": 0, 00:17:53.971 "state": "configuring", 00:17:53.971 "raid_level": "raid1", 00:17:53.971 "superblock": true, 00:17:53.971 "num_base_bdevs": 2, 00:17:53.971 "num_base_bdevs_discovered": 0, 00:17:53.971 
"num_base_bdevs_operational": 2, 00:17:53.971 "base_bdevs_list": [ 00:17:53.971 { 00:17:53.971 "name": "BaseBdev1", 00:17:53.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.971 "is_configured": false, 00:17:53.971 "data_offset": 0, 00:17:53.971 "data_size": 0 00:17:53.971 }, 00:17:53.971 { 00:17:53.971 "name": "BaseBdev2", 00:17:53.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.971 "is_configured": false, 00:17:53.971 "data_offset": 0, 00:17:53.971 "data_size": 0 00:17:53.971 } 00:17:53.971 ] 00:17:53.971 }' 00:17:53.971 08:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.971 08:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.903 08:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.903 [2024-07-12 08:44:30.089482] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.903 [2024-07-12 08:44:30.089764] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:55.161 08:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:55.161 [2024-07-12 08:44:30.317556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.161 [2024-07-12 08:44:30.317855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.161 [2024-07-12 08:44:30.317965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.161 [2024-07-12 08:44:30.318034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.161 08:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.727 [2024-07-12 08:44:30.613063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.727 BaseBdev1 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.727 08:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.985 [ 00:17:55.985 { 00:17:55.985 "name": "BaseBdev1", 00:17:55.985 "aliases": [ 00:17:55.985 "84b489e0-419a-4af1-a69e-5d3c9d443bfc" 
00:17:55.985 ], 00:17:55.985 "product_name": "Malloc disk", 00:17:55.985 "block_size": 512, 00:17:55.985 "num_blocks": 65536, 00:17:55.985 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:17:55.985 "assigned_rate_limits": { 00:17:55.985 "rw_ios_per_sec": 0, 00:17:55.985 "rw_mbytes_per_sec": 0, 00:17:55.985 "r_mbytes_per_sec": 0, 00:17:55.985 "w_mbytes_per_sec": 0 00:17:55.985 }, 00:17:55.985 "claimed": true, 00:17:55.985 "claim_type": "exclusive_write", 00:17:55.985 "zoned": false, 00:17:55.985 "supported_io_types": { 00:17:55.985 "read": true, 00:17:55.985 "write": true, 00:17:55.985 "unmap": true, 00:17:55.985 "flush": true, 00:17:55.985 "reset": true, 00:17:55.985 "nvme_admin": false, 00:17:55.985 "nvme_io": false, 00:17:55.985 "nvme_io_md": false, 00:17:55.985 "write_zeroes": true, 00:17:55.985 "zcopy": true, 00:17:55.985 "get_zone_info": false, 00:17:55.985 "zone_management": false, 00:17:55.985 "zone_append": false, 00:17:55.985 "compare": false, 00:17:55.985 "compare_and_write": false, 00:17:55.985 "abort": true, 00:17:55.985 "seek_hole": false, 00:17:55.985 "seek_data": false, 00:17:55.985 "copy": true, 00:17:55.985 "nvme_iov_md": false 00:17:55.985 }, 00:17:55.985 "memory_domains": [ 00:17:55.985 { 00:17:55.985 "dma_device_id": "system", 00:17:55.985 "dma_device_type": 1 00:17:55.985 }, 00:17:55.985 { 00:17:55.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.985 "dma_device_type": 2 00:17:55.985 } 00:17:55.985 ], 00:17:55.985 "driver_specific": {} 00:17:55.985 } 00:17:55.985 ] 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.985 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.243 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.243 "name": "Existed_Raid", 00:17:56.243 "uuid": "3418a8d4-9171-4628-a40e-72940102e9ce", 00:17:56.243 "strip_size_kb": 0, 00:17:56.243 "state": "configuring", 00:17:56.243 "raid_level": "raid1", 00:17:56.243 "superblock": true, 00:17:56.243 "num_base_bdevs": 2, 00:17:56.243 "num_base_bdevs_discovered": 
1, 00:17:56.243 "num_base_bdevs_operational": 2, 00:17:56.243 "base_bdevs_list": [ 00:17:56.243 { 00:17:56.243 "name": "BaseBdev1", 00:17:56.243 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:17:56.243 "is_configured": true, 00:17:56.243 "data_offset": 2048, 00:17:56.243 "data_size": 63488 00:17:56.243 }, 00:17:56.243 { 00:17:56.243 "name": "BaseBdev2", 00:17:56.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.243 "is_configured": false, 00:17:56.243 "data_offset": 0, 00:17:56.243 "data_size": 0 00:17:56.243 } 00:17:56.243 ] 00:17:56.243 }' 00:17:56.243 08:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.243 08:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.174 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:57.431 [2024-07-12 08:44:32.401543] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.431 [2024-07-12 08:44:32.401743] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:57.431 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:57.689 [2024-07-12 08:44:32.677687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.689 [2024-07-12 08:44:32.680077] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.689 [2024-07-12 08:44:32.680281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.689 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:57.946 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.946 "name": "Existed_Raid", 00:17:57.946 "uuid": "75db675a-6419-456c-a54c-9176c1dd33a3", 00:17:57.946 "strip_size_kb": 0, 00:17:57.946 "state": "configuring", 00:17:57.946 "raid_level": "raid1", 00:17:57.946 "superblock": true, 00:17:57.946 "num_base_bdevs": 2, 00:17:57.946 "num_base_bdevs_discovered": 1, 00:17:57.946 "num_base_bdevs_operational": 2, 00:17:57.946 "base_bdevs_list": [ 00:17:57.946 { 00:17:57.946 "name": "BaseBdev1", 00:17:57.946 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:17:57.946 "is_configured": true, 00:17:57.946 "data_offset": 2048, 00:17:57.946 "data_size": 63488 00:17:57.946 }, 00:17:57.946 { 00:17:57.946 "name": "BaseBdev2", 00:17:57.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.946 "is_configured": false, 00:17:57.946 "data_offset": 0, 00:17:57.946 "data_size": 0 00:17:57.946 } 00:17:57.946 ] 00:17:57.946 }' 00:17:57.946 08:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.946 08:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.511 08:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:59.075 [2024-07-12 08:44:34.015065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.075 [2024-07-12 08:44:34.015586] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:59.075 [2024-07-12 08:44:34.015719] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.075 [2024-07-12 08:44:34.015907] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:59.075 BaseBdev2 00:17:59.075 [2024-07-12 08:44:34.016464] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:59.075 [2024-07-12 08:44:34.016591] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:59.075 [2024-07-12 08:44:34.016880] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.075 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.656 [ 00:17:59.656 { 00:17:59.656 "name": "BaseBdev2", 00:17:59.656 "aliases": [ 00:17:59.656 
"b588b2a6-92d7-4ce0-be1c-a6e33593e433" 00:17:59.656 ], 00:17:59.656 "product_name": "Malloc disk", 00:17:59.656 "block_size": 512, 00:17:59.656 "num_blocks": 65536, 00:17:59.656 "uuid": "b588b2a6-92d7-4ce0-be1c-a6e33593e433", 00:17:59.656 "assigned_rate_limits": { 00:17:59.656 "rw_ios_per_sec": 0, 00:17:59.656 "rw_mbytes_per_sec": 0, 00:17:59.656 "r_mbytes_per_sec": 0, 00:17:59.656 "w_mbytes_per_sec": 0 00:17:59.656 }, 00:17:59.656 "claimed": true, 00:17:59.656 "claim_type": "exclusive_write", 00:17:59.656 "zoned": false, 00:17:59.656 "supported_io_types": { 00:17:59.656 "read": true, 00:17:59.656 "write": true, 00:17:59.656 "unmap": true, 00:17:59.656 "flush": true, 00:17:59.656 "reset": true, 00:17:59.656 "nvme_admin": false, 00:17:59.656 "nvme_io": false, 00:17:59.656 "nvme_io_md": false, 00:17:59.656 "write_zeroes": true, 00:17:59.656 "zcopy": true, 00:17:59.656 "get_zone_info": false, 00:17:59.656 "zone_management": false, 00:17:59.656 "zone_append": false, 00:17:59.656 "compare": false, 00:17:59.656 "compare_and_write": false, 00:17:59.656 "abort": true, 00:17:59.656 "seek_hole": false, 00:17:59.656 "seek_data": false, 00:17:59.656 "copy": true, 00:17:59.656 "nvme_iov_md": false 00:17:59.656 }, 00:17:59.656 "memory_domains": [ 00:17:59.656 { 00:17:59.656 "dma_device_id": "system", 00:17:59.656 "dma_device_type": 1 00:17:59.656 }, 00:17:59.656 { 00:17:59.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.656 "dma_device_type": 2 00:17:59.656 } 00:17:59.656 ], 00:17:59.656 "driver_specific": {} 00:17:59.656 } 00:17:59.656 ] 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.656 "name": "Existed_Raid", 00:17:59.656 "uuid": 
"75db675a-6419-456c-a54c-9176c1dd33a3", 00:17:59.656 "strip_size_kb": 0, 00:17:59.656 "state": "online", 00:17:59.656 "raid_level": "raid1", 00:17:59.656 "superblock": true, 00:17:59.656 "num_base_bdevs": 2, 00:17:59.656 "num_base_bdevs_discovered": 2, 00:17:59.656 "num_base_bdevs_operational": 2, 00:17:59.656 "base_bdevs_list": [ 00:17:59.656 { 00:17:59.656 "name": "BaseBdev1", 00:17:59.656 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:17:59.656 "is_configured": true, 00:17:59.656 "data_offset": 2048, 00:17:59.656 "data_size": 63488 00:17:59.656 }, 00:17:59.656 { 00:17:59.656 "name": "BaseBdev2", 00:17:59.656 "uuid": "b588b2a6-92d7-4ce0-be1c-a6e33593e433", 00:17:59.656 "is_configured": true, 00:17:59.656 "data_offset": 2048, 00:17:59.656 "data_size": 63488 00:17:59.656 } 00:17:59.656 ] 00:17:59.656 }' 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.656 08:44:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:00.608 [2024-07-12 08:44:35.775832] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:00.608 "name": "Existed_Raid", 00:18:00.608 "aliases": [ 00:18:00.608 "75db675a-6419-456c-a54c-9176c1dd33a3" 00:18:00.608 ], 00:18:00.608 "product_name": "Raid Volume", 00:18:00.608 "block_size": 512, 00:18:00.608 "num_blocks": 63488, 00:18:00.608 "uuid": "75db675a-6419-456c-a54c-9176c1dd33a3", 00:18:00.608 "assigned_rate_limits": { 00:18:00.608 "rw_ios_per_sec": 0, 00:18:00.608 "rw_mbytes_per_sec": 0, 00:18:00.608 "r_mbytes_per_sec": 0, 00:18:00.608 "w_mbytes_per_sec": 0 00:18:00.608 }, 00:18:00.608 "claimed": false, 00:18:00.608 "zoned": false, 00:18:00.608 "supported_io_types": { 00:18:00.608 "read": true, 00:18:00.608 "write": true, 00:18:00.608 "unmap": false, 00:18:00.608 "flush": false, 00:18:00.608 "reset": true, 00:18:00.608 "nvme_admin": false, 00:18:00.608 "nvme_io": false, 00:18:00.608 "nvme_io_md": false, 00:18:00.608 "write_zeroes": true, 00:18:00.608 "zcopy": false, 00:18:00.608 "get_zone_info": false, 00:18:00.608 "zone_management": false, 00:18:00.608 "zone_append": false, 00:18:00.608 "compare": false, 00:18:00.608 "compare_and_write": false, 00:18:00.608 "abort": false, 00:18:00.608 "seek_hole": false, 00:18:00.608 "seek_data": false, 00:18:00.608 "copy": false, 00:18:00.608 "nvme_iov_md": false 00:18:00.608 }, 00:18:00.608 "memory_domains": [ 00:18:00.608 { 00:18:00.608 
"dma_device_id": "system", 00:18:00.608 "dma_device_type": 1 00:18:00.608 }, 00:18:00.608 { 00:18:00.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.608 "dma_device_type": 2 00:18:00.608 }, 00:18:00.608 { 00:18:00.608 "dma_device_id": "system", 00:18:00.608 "dma_device_type": 1 00:18:00.608 }, 00:18:00.608 { 00:18:00.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.608 "dma_device_type": 2 00:18:00.608 } 00:18:00.608 ], 00:18:00.608 "driver_specific": { 00:18:00.608 "raid": { 00:18:00.608 "uuid": "75db675a-6419-456c-a54c-9176c1dd33a3", 00:18:00.608 "strip_size_kb": 0, 00:18:00.608 "state": "online", 00:18:00.608 "raid_level": "raid1", 00:18:00.608 "superblock": true, 00:18:00.608 "num_base_bdevs": 2, 00:18:00.608 "num_base_bdevs_discovered": 2, 00:18:00.608 "num_base_bdevs_operational": 2, 00:18:00.608 "base_bdevs_list": [ 00:18:00.608 { 00:18:00.608 "name": "BaseBdev1", 00:18:00.608 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:18:00.608 "is_configured": true, 00:18:00.608 "data_offset": 2048, 00:18:00.608 "data_size": 63488 00:18:00.608 }, 00:18:00.608 { 00:18:00.608 "name": "BaseBdev2", 00:18:00.608 "uuid": "b588b2a6-92d7-4ce0-be1c-a6e33593e433", 00:18:00.608 "is_configured": true, 00:18:00.608 "data_offset": 2048, 00:18:00.608 "data_size": 63488 00:18:00.608 } 00:18:00.608 ] 00:18:00.608 } 00:18:00.608 } 00:18:00.608 }' 00:18:00.608 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.866 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:00.866 BaseBdev2' 00:18:00.866 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.866 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:00.866 08:44:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.124 "name": "BaseBdev1", 00:18:01.124 "aliases": [ 00:18:01.124 "84b489e0-419a-4af1-a69e-5d3c9d443bfc" 00:18:01.124 ], 00:18:01.124 "product_name": "Malloc disk", 00:18:01.124 "block_size": 512, 00:18:01.124 "num_blocks": 65536, 00:18:01.124 "uuid": "84b489e0-419a-4af1-a69e-5d3c9d443bfc", 00:18:01.124 "assigned_rate_limits": { 00:18:01.124 "rw_ios_per_sec": 0, 00:18:01.124 "rw_mbytes_per_sec": 0, 00:18:01.124 "r_mbytes_per_sec": 0, 00:18:01.124 "w_mbytes_per_sec": 0 00:18:01.124 }, 00:18:01.124 "claimed": true, 00:18:01.124 "claim_type": "exclusive_write", 00:18:01.124 "zoned": false, 00:18:01.124 "supported_io_types": { 00:18:01.124 "read": true, 00:18:01.124 "write": true, 00:18:01.124 "unmap": true, 00:18:01.124 "flush": true, 00:18:01.124 "reset": true, 00:18:01.124 "nvme_admin": false, 00:18:01.124 "nvme_io": false, 00:18:01.124 "nvme_io_md": false, 00:18:01.124 "write_zeroes": true, 00:18:01.124 "zcopy": true, 00:18:01.124 "get_zone_info": false, 00:18:01.124 "zone_management": false, 00:18:01.124 "zone_append": false, 00:18:01.124 "compare": false, 00:18:01.124 "compare_and_write": false, 00:18:01.124 "abort": true, 00:18:01.124 "seek_hole": false, 00:18:01.124 "seek_data": false, 00:18:01.124 "copy": true, 00:18:01.124 "nvme_iov_md": false 00:18:01.124 }, 00:18:01.124 "memory_domains": [ 00:18:01.124 { 00:18:01.124 
"dma_device_id": "system", 00:18:01.124 "dma_device_type": 1 00:18:01.124 }, 00:18:01.124 { 00:18:01.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.124 "dma_device_type": 2 00:18:01.124 } 00:18:01.124 ], 00:18:01.124 "driver_specific": {} 00:18:01.124 }' 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.124 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:01.382 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.640 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.640 "name": "BaseBdev2", 00:18:01.640 "aliases": [ 00:18:01.640 "b588b2a6-92d7-4ce0-be1c-a6e33593e433" 00:18:01.640 ], 00:18:01.640 "product_name": "Malloc disk", 00:18:01.640 "block_size": 512, 00:18:01.640 "num_blocks": 65536, 00:18:01.640 "uuid": "b588b2a6-92d7-4ce0-be1c-a6e33593e433", 00:18:01.640 "assigned_rate_limits": { 00:18:01.640 "rw_ios_per_sec": 0, 00:18:01.640 "rw_mbytes_per_sec": 0, 00:18:01.640 "r_mbytes_per_sec": 0, 00:18:01.640 "w_mbytes_per_sec": 0 00:18:01.640 }, 00:18:01.640 "claimed": true, 00:18:01.640 "claim_type": "exclusive_write", 00:18:01.640 "zoned": false, 00:18:01.640 "supported_io_types": { 00:18:01.640 "read": true, 00:18:01.640 "write": true, 00:18:01.640 "unmap": true, 00:18:01.640 "flush": true, 00:18:01.640 "reset": true, 00:18:01.640 "nvme_admin": false, 00:18:01.640 "nvme_io": false, 00:18:01.640 "nvme_io_md": false, 00:18:01.640 "write_zeroes": true, 00:18:01.640 "zcopy": true, 00:18:01.640 "get_zone_info": false, 00:18:01.640 "zone_management": false, 00:18:01.640 "zone_append": false, 00:18:01.640 "compare": false, 00:18:01.640 "compare_and_write": false, 00:18:01.640 "abort": true, 00:18:01.640 "seek_hole": false, 00:18:01.640 "seek_data": false, 00:18:01.640 "copy": true, 00:18:01.640 "nvme_iov_md": false 00:18:01.640 }, 00:18:01.640 "memory_domains": [ 00:18:01.640 { 00:18:01.640 "dma_device_id": "system", 00:18:01.640 "dma_device_type": 1 00:18:01.640 }, 00:18:01.640 { 00:18:01.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:01.640 "dma_device_type": 2 00:18:01.640 } 00:18:01.640 ], 00:18:01.640 "driver_specific": {} 00:18:01.640 }' 00:18:01.640 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.640 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.898 08:44:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.898 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.898 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.156 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.156 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.156 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.415 [2024-07-12 08:44:37.472010] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:02.415 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.673 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.673 "name": "Existed_Raid", 00:18:02.673 "uuid": "75db675a-6419-456c-a54c-9176c1dd33a3", 00:18:02.673 "strip_size_kb": 0, 00:18:02.673 "state": "online", 00:18:02.673 "raid_level": "raid1", 00:18:02.673 "superblock": true, 00:18:02.673 "num_base_bdevs": 2, 00:18:02.673 "num_base_bdevs_discovered": 1, 00:18:02.673 "num_base_bdevs_operational": 1, 00:18:02.673 "base_bdevs_list": [ 00:18:02.673 { 00:18:02.673 "name": null, 00:18:02.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.673 "is_configured": false, 00:18:02.673 "data_offset": 2048, 00:18:02.673 "data_size": 63488 00:18:02.673 }, 00:18:02.673 { 00:18:02.673 "name": "BaseBdev2", 00:18:02.673 "uuid": "b588b2a6-92d7-4ce0-be1c-a6e33593e433", 00:18:02.673 "is_configured": true, 00:18:02.673 "data_offset": 2048, 00:18:02.673 "data_size": 63488 00:18:02.673 } 00:18:02.673 ] 00:18:02.673 }' 00:18:02.673 08:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.673 08:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.618 08:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:04.184 [2024-07-12 08:44:39.076718] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.184 [2024-07-12 08:44:39.077087] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.184 [2024-07-12 08:44:39.160711] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.184 [2024-07-12 08:44:39.160986] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.184 [2024-07-12 08:44:39.161112] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:04.184 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:04.185 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.185 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.185 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 124544 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 124544 ']' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 124544 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124544 00:18:04.444 killing process with pid 124544 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124544' 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 124544 00:18:04.444 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 124544 00:18:04.444 [2024-07-12 08:44:39.429091] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.444 [2024-07-12 08:44:39.429224] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.374 ************************************ 00:18:05.374 END TEST raid_state_function_test_sb 00:18:05.374 ************************************ 00:18:05.374 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:05.374 00:18:05.374 real 0m13.027s 00:18:05.374 user 0m23.306s 00:18:05.374 sys 0m1.394s 00:18:05.374 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.374 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.632 08:44:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:05.632 08:44:40 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:18:05.632 08:44:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:05.632 08:44:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.632 08:44:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.632 ************************************ 00:18:05.632 START TEST raid_superblock_test 00:18:05.632 ************************************ 00:18:05.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
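For anyone replaying the raid_superblock_test stage by hand, the whole test is driven over the dedicated RPC socket /var/tmp/spdk-raid.sock mentioned above. A minimal sketch of the setup sequence, condensed from the rpc.py calls that appear later in this log (the bdev names, passthru UUIDs and rpc.py path simply mirror the ones this run uses; they are not required values), is:
  # create two 32 MiB / 512 B-block malloc base bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
  # wrap each malloc bdev in a passthru bdev with a fixed UUID
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble a raid1 bdev with an on-disk superblock (-s) and inspect its state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'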
00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=124941 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 124941 /var/tmp/spdk-raid.sock 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 124941 ']' 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.632 08:44:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.632 [2024-07-12 08:44:40.676955] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:18:05.632 [2024-07-12 08:44:40.677390] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124941 ] 00:18:05.921 [2024-07-12 08:44:40.847523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.921 [2024-07-12 08:44:41.062421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.178 [2024-07-12 08:44:41.278717] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.740 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:06.740 malloc1 00:18:06.997 08:44:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.997 [2024-07-12 08:44:42.177350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.997 [2024-07-12 08:44:42.177633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.997 [2024-07-12 08:44:42.177800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:06.997 [2024-07-12 08:44:42.177920] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.997 [2024-07-12 08:44:42.180590] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.997 [2024-07-12 08:44:42.180756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.997 pt1 00:18:07.255 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:07.255 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:07.255 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.256 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:07.535 malloc2 00:18:07.535 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.793 [2024-07-12 08:44:42.742551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.793 [2024-07-12 08:44:42.742880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.793 [2024-07-12 08:44:42.742975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:07.793 [2024-07-12 08:44:42.743241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.793 [2024-07-12 08:44:42.745871] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.793 [2024-07-12 08:44:42.746039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.793 pt2 00:18:07.793 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:07.793 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:07.793 08:44:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:08.054 [2024-07-12 08:44:43.018684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.054 [2024-07-12 08:44:43.021067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.054 [2024-07-12 08:44:43.021423] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:18:08.054 [2024-07-12 08:44:43.021550] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:08.054 [2024-07-12 08:44:43.021814] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:08.054 [2024-07-12 08:44:43.022356] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:18:08.054 [2024-07-12 08:44:43.022475] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:18:08.054 [2024-07-12 08:44:43.022811] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.054 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.313 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.313 "name": "raid_bdev1", 00:18:08.313 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:08.313 "strip_size_kb": 0, 00:18:08.313 "state": "online", 00:18:08.313 "raid_level": "raid1", 00:18:08.313 "superblock": true, 00:18:08.313 "num_base_bdevs": 2, 00:18:08.313 "num_base_bdevs_discovered": 2, 00:18:08.313 "num_base_bdevs_operational": 2, 00:18:08.313 "base_bdevs_list": [ 00:18:08.313 { 00:18:08.313 "name": "pt1", 00:18:08.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.313 "is_configured": true, 00:18:08.313 "data_offset": 2048, 00:18:08.313 "data_size": 63488 00:18:08.313 }, 00:18:08.313 { 00:18:08.313 "name": "pt2", 00:18:08.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.313 "is_configured": true, 00:18:08.313 "data_offset": 2048, 00:18:08.313 "data_size": 63488 00:18:08.313 } 00:18:08.313 ] 00:18:08.313 }' 00:18:08.313 08:44:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.313 08:44:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.879 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:09.137 [2024-07-12 08:44:44.307353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.138 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:09.138 "name": "raid_bdev1", 00:18:09.138 "aliases": [ 00:18:09.138 "483f74df-e59b-4242-a22d-ba6e63077c29" 00:18:09.138 ], 00:18:09.138 "product_name": "Raid Volume", 00:18:09.138 "block_size": 512, 00:18:09.138 "num_blocks": 63488, 00:18:09.138 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:09.138 "assigned_rate_limits": { 00:18:09.138 "rw_ios_per_sec": 0, 00:18:09.138 "rw_mbytes_per_sec": 0, 00:18:09.138 "r_mbytes_per_sec": 0, 00:18:09.138 "w_mbytes_per_sec": 0 00:18:09.138 }, 
00:18:09.138 "claimed": false, 00:18:09.138 "zoned": false, 00:18:09.138 "supported_io_types": { 00:18:09.138 "read": true, 00:18:09.138 "write": true, 00:18:09.138 "unmap": false, 00:18:09.138 "flush": false, 00:18:09.138 "reset": true, 00:18:09.138 "nvme_admin": false, 00:18:09.138 "nvme_io": false, 00:18:09.138 "nvme_io_md": false, 00:18:09.138 "write_zeroes": true, 00:18:09.138 "zcopy": false, 00:18:09.138 "get_zone_info": false, 00:18:09.138 "zone_management": false, 00:18:09.138 "zone_append": false, 00:18:09.138 "compare": false, 00:18:09.138 "compare_and_write": false, 00:18:09.138 "abort": false, 00:18:09.138 "seek_hole": false, 00:18:09.138 "seek_data": false, 00:18:09.138 "copy": false, 00:18:09.138 "nvme_iov_md": false 00:18:09.138 }, 00:18:09.138 "memory_domains": [ 00:18:09.138 { 00:18:09.138 "dma_device_id": "system", 00:18:09.138 "dma_device_type": 1 00:18:09.138 }, 00:18:09.138 { 00:18:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.138 "dma_device_type": 2 00:18:09.138 }, 00:18:09.138 { 00:18:09.138 "dma_device_id": "system", 00:18:09.138 "dma_device_type": 1 00:18:09.138 }, 00:18:09.138 { 00:18:09.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.138 "dma_device_type": 2 00:18:09.138 } 00:18:09.138 ], 00:18:09.138 "driver_specific": { 00:18:09.138 "raid": { 00:18:09.138 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:09.138 "strip_size_kb": 0, 00:18:09.138 "state": "online", 00:18:09.138 "raid_level": "raid1", 00:18:09.138 "superblock": true, 00:18:09.138 "num_base_bdevs": 2, 00:18:09.138 "num_base_bdevs_discovered": 2, 00:18:09.138 "num_base_bdevs_operational": 2, 00:18:09.138 "base_bdevs_list": [ 00:18:09.138 { 00:18:09.138 "name": "pt1", 00:18:09.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.138 "is_configured": true, 00:18:09.138 "data_offset": 2048, 00:18:09.138 "data_size": 63488 00:18:09.138 }, 00:18:09.138 { 00:18:09.138 "name": "pt2", 00:18:09.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.138 "is_configured": true, 00:18:09.138 "data_offset": 2048, 00:18:09.138 "data_size": 63488 00:18:09.138 } 00:18:09.138 ] 00:18:09.138 } 00:18:09.138 } 00:18:09.138 }' 00:18:09.138 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.396 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:09.396 pt2' 00:18:09.396 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.396 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:09.396 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:09.655 "name": "pt1", 00:18:09.655 "aliases": [ 00:18:09.655 "00000000-0000-0000-0000-000000000001" 00:18:09.655 ], 00:18:09.655 "product_name": "passthru", 00:18:09.655 "block_size": 512, 00:18:09.655 "num_blocks": 65536, 00:18:09.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.655 "assigned_rate_limits": { 00:18:09.655 "rw_ios_per_sec": 0, 00:18:09.655 "rw_mbytes_per_sec": 0, 00:18:09.655 "r_mbytes_per_sec": 0, 00:18:09.655 "w_mbytes_per_sec": 0 00:18:09.655 }, 00:18:09.655 "claimed": true, 00:18:09.655 "claim_type": "exclusive_write", 00:18:09.655 "zoned": false, 00:18:09.655 
"supported_io_types": { 00:18:09.655 "read": true, 00:18:09.655 "write": true, 00:18:09.655 "unmap": true, 00:18:09.655 "flush": true, 00:18:09.655 "reset": true, 00:18:09.655 "nvme_admin": false, 00:18:09.655 "nvme_io": false, 00:18:09.655 "nvme_io_md": false, 00:18:09.655 "write_zeroes": true, 00:18:09.655 "zcopy": true, 00:18:09.655 "get_zone_info": false, 00:18:09.655 "zone_management": false, 00:18:09.655 "zone_append": false, 00:18:09.655 "compare": false, 00:18:09.655 "compare_and_write": false, 00:18:09.655 "abort": true, 00:18:09.655 "seek_hole": false, 00:18:09.655 "seek_data": false, 00:18:09.655 "copy": true, 00:18:09.655 "nvme_iov_md": false 00:18:09.655 }, 00:18:09.655 "memory_domains": [ 00:18:09.655 { 00:18:09.655 "dma_device_id": "system", 00:18:09.655 "dma_device_type": 1 00:18:09.655 }, 00:18:09.655 { 00:18:09.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.655 "dma_device_type": 2 00:18:09.655 } 00:18:09.655 ], 00:18:09.655 "driver_specific": { 00:18:09.655 "passthru": { 00:18:09.655 "name": "pt1", 00:18:09.655 "base_bdev_name": "malloc1" 00:18:09.655 } 00:18:09.655 } 00:18:09.655 }' 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:09.655 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.915 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.915 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:09.915 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.915 08:44:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.915 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:09.915 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.915 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:09.915 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.173 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.173 "name": "pt2", 00:18:10.173 "aliases": [ 00:18:10.173 "00000000-0000-0000-0000-000000000002" 00:18:10.173 ], 00:18:10.173 "product_name": "passthru", 00:18:10.173 "block_size": 512, 00:18:10.173 "num_blocks": 65536, 00:18:10.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.173 "assigned_rate_limits": { 00:18:10.173 "rw_ios_per_sec": 0, 00:18:10.173 "rw_mbytes_per_sec": 0, 00:18:10.173 "r_mbytes_per_sec": 0, 00:18:10.173 "w_mbytes_per_sec": 0 00:18:10.173 }, 00:18:10.173 "claimed": true, 00:18:10.173 "claim_type": "exclusive_write", 00:18:10.173 "zoned": false, 00:18:10.173 "supported_io_types": { 00:18:10.173 "read": true, 00:18:10.173 "write": true, 00:18:10.173 "unmap": true, 00:18:10.173 "flush": true, 00:18:10.173 
"reset": true, 00:18:10.173 "nvme_admin": false, 00:18:10.173 "nvme_io": false, 00:18:10.173 "nvme_io_md": false, 00:18:10.173 "write_zeroes": true, 00:18:10.173 "zcopy": true, 00:18:10.173 "get_zone_info": false, 00:18:10.173 "zone_management": false, 00:18:10.173 "zone_append": false, 00:18:10.173 "compare": false, 00:18:10.173 "compare_and_write": false, 00:18:10.173 "abort": true, 00:18:10.173 "seek_hole": false, 00:18:10.173 "seek_data": false, 00:18:10.173 "copy": true, 00:18:10.173 "nvme_iov_md": false 00:18:10.173 }, 00:18:10.173 "memory_domains": [ 00:18:10.173 { 00:18:10.173 "dma_device_id": "system", 00:18:10.173 "dma_device_type": 1 00:18:10.173 }, 00:18:10.173 { 00:18:10.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.173 "dma_device_type": 2 00:18:10.173 } 00:18:10.173 ], 00:18:10.173 "driver_specific": { 00:18:10.173 "passthru": { 00:18:10.173 "name": "pt2", 00:18:10.173 "base_bdev_name": "malloc2" 00:18:10.173 } 00:18:10.173 } 00:18:10.173 }' 00:18:10.173 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.173 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.431 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.689 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.689 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.689 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.689 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:10.946 [2024-07-12 08:44:45.927719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.946 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=483f74df-e59b-4242-a22d-ba6e63077c29 00:18:10.946 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 483f74df-e59b-4242-a22d-ba6e63077c29 ']' 00:18:10.946 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:11.205 [2024-07-12 08:44:46.187458] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.205 [2024-07-12 08:44:46.187695] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.205 [2024-07-12 08:44:46.187934] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.205 [2024-07-12 08:44:46.188107] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:11.205 [2024-07-12 08:44:46.188224] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:18:11.205 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.205 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:11.463 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:11.463 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:11.463 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.463 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:11.721 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:11.721 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:11.979 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:11.979 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:12.237 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:12.494 [2024-07-12 08:44:47.491714] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:12.494 [2024-07-12 08:44:47.494134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:12.494 [2024-07-12 08:44:47.494354] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:12.494 [2024-07-12 08:44:47.494590] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:12.494 [2024-07-12 08:44:47.494739] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.494 [2024-07-12 08:44:47.494841] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:18:12.494 request: 00:18:12.494 { 00:18:12.494 "name": "raid_bdev1", 00:18:12.494 "raid_level": "raid1", 00:18:12.494 "base_bdevs": [ 00:18:12.494 "malloc1", 00:18:12.494 "malloc2" 00:18:12.494 ], 00:18:12.494 "superblock": false, 00:18:12.494 "method": "bdev_raid_create", 00:18:12.494 "req_id": 1 00:18:12.494 } 00:18:12.494 Got JSON-RPC error response 00:18:12.494 response: 00:18:12.494 { 00:18:12.494 "code": -17, 00:18:12.494 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:12.494 } 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.494 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:12.752 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:12.752 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:12.752 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.009 [2024-07-12 08:44:47.983769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.009 [2024-07-12 08:44:47.984070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.009 [2024-07-12 08:44:47.984139] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:13.009 [2024-07-12 08:44:47.984383] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.010 [2024-07-12 08:44:47.986984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.010 [2024-07-12 08:44:47.987183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.010 [2024-07-12 08:44:47.987440] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.010 [2024-07-12 08:44:47.987621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.010 pt1 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.010 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.268 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.268 "name": "raid_bdev1", 00:18:13.268 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:13.268 "strip_size_kb": 0, 00:18:13.268 "state": "configuring", 00:18:13.268 "raid_level": "raid1", 00:18:13.268 "superblock": true, 00:18:13.268 "num_base_bdevs": 2, 00:18:13.268 "num_base_bdevs_discovered": 1, 00:18:13.268 "num_base_bdevs_operational": 2, 00:18:13.268 "base_bdevs_list": [ 00:18:13.268 { 00:18:13.268 "name": "pt1", 00:18:13.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.268 "is_configured": true, 00:18:13.268 "data_offset": 2048, 00:18:13.268 "data_size": 63488 00:18:13.268 }, 00:18:13.268 { 00:18:13.268 "name": null, 00:18:13.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.268 "is_configured": false, 00:18:13.268 "data_offset": 2048, 00:18:13.268 "data_size": 63488 00:18:13.268 } 00:18:13.268 ] 00:18:13.268 }' 00:18:13.268 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.268 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.834 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:13.834 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:13.834 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:13.834 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.092 [2024-07-12 08:44:49.212237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.092 [2024-07-12 08:44:49.212621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.092 [2024-07-12 08:44:49.212700] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:14.092 [2024-07-12 08:44:49.212950] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.092 [2024-07-12 08:44:49.213551] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.092 [2024-07-12 08:44:49.213643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.092 [2024-07-12 08:44:49.213787] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:14.093 [2024-07-12 08:44:49.213845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.093 [2024-07-12 08:44:49.214005] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:14.093 [2024-07-12 08:44:49.214047] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:14.093 [2024-07-12 08:44:49.214184] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:14.093 [2024-07-12 08:44:49.214666] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:14.093 [2024-07-12 08:44:49.214793] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:14.093 [2024-07-12 08:44:49.215060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.093 pt2 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.093 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.351 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.351 "name": "raid_bdev1", 00:18:14.351 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:14.351 "strip_size_kb": 0, 00:18:14.351 "state": "online", 00:18:14.351 "raid_level": "raid1", 00:18:14.351 "superblock": true, 00:18:14.351 "num_base_bdevs": 2, 00:18:14.351 "num_base_bdevs_discovered": 2, 00:18:14.351 "num_base_bdevs_operational": 2, 00:18:14.351 "base_bdevs_list": [ 00:18:14.351 { 00:18:14.351 "name": "pt1", 00:18:14.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.351 "is_configured": true, 00:18:14.351 "data_offset": 2048, 00:18:14.351 "data_size": 63488 00:18:14.351 }, 00:18:14.351 { 
00:18:14.351 "name": "pt2", 00:18:14.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.351 "is_configured": true, 00:18:14.351 "data_offset": 2048, 00:18:14.351 "data_size": 63488 00:18:14.351 } 00:18:14.351 ] 00:18:14.351 }' 00:18:14.351 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.351 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:15.286 [2024-07-12 08:44:50.368774] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.286 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:15.286 "name": "raid_bdev1", 00:18:15.286 "aliases": [ 00:18:15.286 "483f74df-e59b-4242-a22d-ba6e63077c29" 00:18:15.286 ], 00:18:15.286 "product_name": "Raid Volume", 00:18:15.286 "block_size": 512, 00:18:15.286 "num_blocks": 63488, 00:18:15.286 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:15.286 "assigned_rate_limits": { 00:18:15.287 "rw_ios_per_sec": 0, 00:18:15.287 "rw_mbytes_per_sec": 0, 00:18:15.287 "r_mbytes_per_sec": 0, 00:18:15.287 "w_mbytes_per_sec": 0 00:18:15.287 }, 00:18:15.287 "claimed": false, 00:18:15.287 "zoned": false, 00:18:15.287 "supported_io_types": { 00:18:15.287 "read": true, 00:18:15.287 "write": true, 00:18:15.287 "unmap": false, 00:18:15.287 "flush": false, 00:18:15.287 "reset": true, 00:18:15.287 "nvme_admin": false, 00:18:15.287 "nvme_io": false, 00:18:15.287 "nvme_io_md": false, 00:18:15.287 "write_zeroes": true, 00:18:15.287 "zcopy": false, 00:18:15.287 "get_zone_info": false, 00:18:15.287 "zone_management": false, 00:18:15.287 "zone_append": false, 00:18:15.287 "compare": false, 00:18:15.287 "compare_and_write": false, 00:18:15.287 "abort": false, 00:18:15.287 "seek_hole": false, 00:18:15.287 "seek_data": false, 00:18:15.287 "copy": false, 00:18:15.287 "nvme_iov_md": false 00:18:15.287 }, 00:18:15.287 "memory_domains": [ 00:18:15.287 { 00:18:15.287 "dma_device_id": "system", 00:18:15.287 "dma_device_type": 1 00:18:15.287 }, 00:18:15.287 { 00:18:15.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.287 "dma_device_type": 2 00:18:15.287 }, 00:18:15.287 { 00:18:15.287 "dma_device_id": "system", 00:18:15.287 "dma_device_type": 1 00:18:15.287 }, 00:18:15.287 { 00:18:15.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.287 "dma_device_type": 2 00:18:15.287 } 00:18:15.287 ], 00:18:15.287 "driver_specific": { 00:18:15.287 "raid": { 00:18:15.287 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:15.287 "strip_size_kb": 0, 00:18:15.287 "state": "online", 00:18:15.287 "raid_level": "raid1", 
00:18:15.287 "superblock": true, 00:18:15.287 "num_base_bdevs": 2, 00:18:15.287 "num_base_bdevs_discovered": 2, 00:18:15.287 "num_base_bdevs_operational": 2, 00:18:15.287 "base_bdevs_list": [ 00:18:15.287 { 00:18:15.287 "name": "pt1", 00:18:15.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.287 "is_configured": true, 00:18:15.287 "data_offset": 2048, 00:18:15.287 "data_size": 63488 00:18:15.287 }, 00:18:15.287 { 00:18:15.287 "name": "pt2", 00:18:15.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.287 "is_configured": true, 00:18:15.287 "data_offset": 2048, 00:18:15.287 "data_size": 63488 00:18:15.287 } 00:18:15.287 ] 00:18:15.287 } 00:18:15.287 } 00:18:15.287 }' 00:18:15.287 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.287 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:15.287 pt2' 00:18:15.287 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.287 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:15.287 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.545 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.545 "name": "pt1", 00:18:15.545 "aliases": [ 00:18:15.545 "00000000-0000-0000-0000-000000000001" 00:18:15.545 ], 00:18:15.545 "product_name": "passthru", 00:18:15.545 "block_size": 512, 00:18:15.545 "num_blocks": 65536, 00:18:15.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.545 "assigned_rate_limits": { 00:18:15.545 "rw_ios_per_sec": 0, 00:18:15.545 "rw_mbytes_per_sec": 0, 00:18:15.545 "r_mbytes_per_sec": 0, 00:18:15.545 "w_mbytes_per_sec": 0 00:18:15.545 }, 00:18:15.545 "claimed": true, 00:18:15.545 "claim_type": "exclusive_write", 00:18:15.545 "zoned": false, 00:18:15.545 "supported_io_types": { 00:18:15.545 "read": true, 00:18:15.545 "write": true, 00:18:15.545 "unmap": true, 00:18:15.545 "flush": true, 00:18:15.545 "reset": true, 00:18:15.545 "nvme_admin": false, 00:18:15.545 "nvme_io": false, 00:18:15.545 "nvme_io_md": false, 00:18:15.545 "write_zeroes": true, 00:18:15.545 "zcopy": true, 00:18:15.545 "get_zone_info": false, 00:18:15.545 "zone_management": false, 00:18:15.545 "zone_append": false, 00:18:15.545 "compare": false, 00:18:15.545 "compare_and_write": false, 00:18:15.545 "abort": true, 00:18:15.545 "seek_hole": false, 00:18:15.545 "seek_data": false, 00:18:15.545 "copy": true, 00:18:15.545 "nvme_iov_md": false 00:18:15.545 }, 00:18:15.545 "memory_domains": [ 00:18:15.545 { 00:18:15.545 "dma_device_id": "system", 00:18:15.545 "dma_device_type": 1 00:18:15.545 }, 00:18:15.545 { 00:18:15.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.545 "dma_device_type": 2 00:18:15.545 } 00:18:15.545 ], 00:18:15.545 "driver_specific": { 00:18:15.545 "passthru": { 00:18:15.545 "name": "pt1", 00:18:15.545 "base_bdev_name": "malloc1" 00:18:15.545 } 00:18:15.545 } 00:18:15.545 }' 00:18:15.545 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.803 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:16.100 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.370 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.370 "name": "pt2", 00:18:16.370 "aliases": [ 00:18:16.370 "00000000-0000-0000-0000-000000000002" 00:18:16.370 ], 00:18:16.370 "product_name": "passthru", 00:18:16.370 "block_size": 512, 00:18:16.370 "num_blocks": 65536, 00:18:16.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.370 "assigned_rate_limits": { 00:18:16.370 "rw_ios_per_sec": 0, 00:18:16.370 "rw_mbytes_per_sec": 0, 00:18:16.370 "r_mbytes_per_sec": 0, 00:18:16.370 "w_mbytes_per_sec": 0 00:18:16.370 }, 00:18:16.370 "claimed": true, 00:18:16.370 "claim_type": "exclusive_write", 00:18:16.370 "zoned": false, 00:18:16.370 "supported_io_types": { 00:18:16.370 "read": true, 00:18:16.370 "write": true, 00:18:16.370 "unmap": true, 00:18:16.370 "flush": true, 00:18:16.370 "reset": true, 00:18:16.370 "nvme_admin": false, 00:18:16.370 "nvme_io": false, 00:18:16.370 "nvme_io_md": false, 00:18:16.370 "write_zeroes": true, 00:18:16.370 "zcopy": true, 00:18:16.370 "get_zone_info": false, 00:18:16.370 "zone_management": false, 00:18:16.370 "zone_append": false, 00:18:16.370 "compare": false, 00:18:16.370 "compare_and_write": false, 00:18:16.370 "abort": true, 00:18:16.370 "seek_hole": false, 00:18:16.370 "seek_data": false, 00:18:16.370 "copy": true, 00:18:16.370 "nvme_iov_md": false 00:18:16.370 }, 00:18:16.370 "memory_domains": [ 00:18:16.370 { 00:18:16.370 "dma_device_id": "system", 00:18:16.370 "dma_device_type": 1 00:18:16.370 }, 00:18:16.370 { 00:18:16.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.370 "dma_device_type": 2 00:18:16.370 } 00:18:16.370 ], 00:18:16.370 "driver_specific": { 00:18:16.370 "passthru": { 00:18:16.370 "name": "pt2", 00:18:16.370 "base_bdev_name": "malloc2" 00:18:16.370 } 00:18:16.370 } 00:18:16.370 }' 00:18:16.370 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.370 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.630 
08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.630 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.889 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.889 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.889 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:16.889 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:17.147 [2024-07-12 08:44:52.133213] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.147 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 483f74df-e59b-4242-a22d-ba6e63077c29 '!=' 483f74df-e59b-4242-a22d-ba6e63077c29 ']' 00:18:17.147 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:17.147 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:17.147 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:17.147 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:17.407 [2024-07-12 08:44:52.365016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.407 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.666 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.666 "name": "raid_bdev1", 00:18:17.666 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:17.666 "strip_size_kb": 0, 00:18:17.666 "state": "online", 00:18:17.666 "raid_level": "raid1", 00:18:17.666 
"superblock": true, 00:18:17.666 "num_base_bdevs": 2, 00:18:17.666 "num_base_bdevs_discovered": 1, 00:18:17.666 "num_base_bdevs_operational": 1, 00:18:17.666 "base_bdevs_list": [ 00:18:17.666 { 00:18:17.666 "name": null, 00:18:17.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.666 "is_configured": false, 00:18:17.666 "data_offset": 2048, 00:18:17.666 "data_size": 63488 00:18:17.666 }, 00:18:17.666 { 00:18:17.666 "name": "pt2", 00:18:17.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.666 "is_configured": true, 00:18:17.666 "data_offset": 2048, 00:18:17.666 "data_size": 63488 00:18:17.666 } 00:18:17.666 ] 00:18:17.666 }' 00:18:17.666 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.666 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.232 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:18.489 [2024-07-12 08:44:53.625264] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.489 [2024-07-12 08:44:53.625477] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.489 [2024-07-12 08:44:53.625666] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.489 [2024-07-12 08:44:53.625851] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.489 [2024-07-12 08:44:53.625968] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:18.489 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.489 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:18.747 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:18.747 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:18.747 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:18.747 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:18.747 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:18:19.005 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.263 [2024-07-12 08:44:54.369389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.263 [2024-07-12 08:44:54.369692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.263 [2024-07-12 08:44:54.369762] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:19.263 [2024-07-12 08:44:54.369979] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.263 [2024-07-12 08:44:54.372646] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.263 [2024-07-12 08:44:54.372833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.263 [2024-07-12 08:44:54.373091] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:19.263 [2024-07-12 08:44:54.373272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.263 [2024-07-12 08:44:54.373520] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:18:19.263 [2024-07-12 08:44:54.373641] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:19.263 [2024-07-12 08:44:54.373785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:19.263 [2024-07-12 08:44:54.374248] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:18:19.263 [2024-07-12 08:44:54.374365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:18:19.263 [2024-07-12 08:44:54.374657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.263 pt2 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.263 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.522 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.522 "name": "raid_bdev1", 00:18:19.522 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:19.522 "strip_size_kb": 0, 00:18:19.522 "state": "online", 00:18:19.522 "raid_level": "raid1", 00:18:19.522 "superblock": true, 00:18:19.522 "num_base_bdevs": 2, 00:18:19.522 "num_base_bdevs_discovered": 1, 00:18:19.522 "num_base_bdevs_operational": 1, 00:18:19.522 "base_bdevs_list": [ 00:18:19.522 { 00:18:19.522 "name": null, 00:18:19.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.522 "is_configured": false, 00:18:19.522 "data_offset": 
2048, 00:18:19.522 "data_size": 63488 00:18:19.522 }, 00:18:19.522 { 00:18:19.522 "name": "pt2", 00:18:19.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.522 "is_configured": true, 00:18:19.522 "data_offset": 2048, 00:18:19.522 "data_size": 63488 00:18:19.522 } 00:18:19.522 ] 00:18:19.522 }' 00:18:19.522 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.522 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.457 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:20.457 [2024-07-12 08:44:55.629911] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.457 [2024-07-12 08:44:55.630073] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.457 [2024-07-12 08:44:55.630260] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.457 [2024-07-12 08:44:55.630414] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.457 [2024-07-12 08:44:55.630515] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:18:20.457 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.457 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:20.716 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:20.716 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:20.716 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:20.716 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.974 [2024-07-12 08:44:56.106003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.974 [2024-07-12 08:44:56.106283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.974 [2024-07-12 08:44:56.106366] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:20.974 [2024-07-12 08:44:56.106625] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.974 [2024-07-12 08:44:56.109197] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.974 [2024-07-12 08:44:56.109384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.974 [2024-07-12 08:44:56.109606] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.974 [2024-07-12 08:44:56.109768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.974 [2024-07-12 08:44:56.110039] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:20.974 [2024-07-12 08:44:56.110160] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.974 [2024-07-12 08:44:56.110225] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, 
state configuring 00:18:20.974 [2024-07-12 08:44:56.110509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.974 [2024-07-12 08:44:56.110755] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:18:20.974 [2024-07-12 08:44:56.110872] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.974 pt1 00:18:20.974 [2024-07-12 08:44:56.111036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:20.974 [2024-07-12 08:44:56.111415] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:18:20.974 [2024-07-12 08:44:56.111535] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:18:20.974 [2024-07-12 08:44:56.111776] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.974 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.975 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.975 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.975 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.975 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.233 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.233 "name": "raid_bdev1", 00:18:21.233 "uuid": "483f74df-e59b-4242-a22d-ba6e63077c29", 00:18:21.233 "strip_size_kb": 0, 00:18:21.233 "state": "online", 00:18:21.233 "raid_level": "raid1", 00:18:21.233 "superblock": true, 00:18:21.233 "num_base_bdevs": 2, 00:18:21.233 "num_base_bdevs_discovered": 1, 00:18:21.233 "num_base_bdevs_operational": 1, 00:18:21.233 "base_bdevs_list": [ 00:18:21.233 { 00:18:21.233 "name": null, 00:18:21.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.233 "is_configured": false, 00:18:21.233 "data_offset": 2048, 00:18:21.233 "data_size": 63488 00:18:21.233 }, 00:18:21.233 { 00:18:21.233 "name": "pt2", 00:18:21.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.233 "is_configured": true, 00:18:21.233 "data_offset": 2048, 00:18:21.233 "data_size": 63488 00:18:21.233 } 00:18:21.233 ] 00:18:21.233 }' 00:18:21.233 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.233 08:44:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.168 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:22.168 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:22.427 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:22.427 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.427 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:22.686 [2024-07-12 08:44:57.630757] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 483f74df-e59b-4242-a22d-ba6e63077c29 '!=' 483f74df-e59b-4242-a22d-ba6e63077c29 ']' 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 124941 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 124941 ']' 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 124941 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124941 00:18:22.686 killing process with pid 124941 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124941' 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 124941 00:18:22.686 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 124941 00:18:22.686 [2024-07-12 08:44:57.668925] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.686 [2024-07-12 08:44:57.669014] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.686 [2024-07-12 08:44:57.669071] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.686 [2024-07-12 08:44:57.669083] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:18:22.686 [2024-07-12 08:44:57.833525] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.057 ************************************ 00:18:24.057 END TEST raid_superblock_test 00:18:24.057 ************************************ 00:18:24.057 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:24.057 00:18:24.057 real 0m18.347s 00:18:24.057 user 0m33.877s 00:18:24.057 sys 0m1.994s 00:18:24.057 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:24.057 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.057 08:44:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:24.057 
08:44:58 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:18:24.057 08:44:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:24.057 08:44:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.057 08:44:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.057 ************************************ 00:18:24.057 START TEST raid_read_error_test 00:18:24.057 ************************************ 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6EUwfXX6cc 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=125535 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 125535 /var/tmp/spdk-raid.sock 00:18:24.057 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 125535 ']' 00:18:24.057 08:44:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:24.058 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.058 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:24.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:24.058 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.058 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.058 [2024-07-12 08:44:59.095353] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:18:24.058 [2024-07-12 08:44:59.095722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125535 ] 00:18:24.315 [2024-07-12 08:44:59.264140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.573 [2024-07-12 08:44:59.537127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.573 [2024-07-12 08:44:59.751906] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.139 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.139 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:25.139 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:25.139 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:25.397 BaseBdev1_malloc 00:18:25.397 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:25.654 true 00:18:25.654 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:25.911 [2024-07-12 08:45:00.959428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:25.911 [2024-07-12 08:45:00.959728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.911 [2024-07-12 08:45:00.959884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:25.911 [2024-07-12 08:45:00.960001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.911 [2024-07-12 08:45:00.962653] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.911 [2024-07-12 08:45:00.962811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:25.911 BaseBdev1 00:18:25.911 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:25.911 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:26.168 BaseBdev2_malloc 
00:18:26.168 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:26.426 true 00:18:26.426 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:26.683 [2024-07-12 08:45:01.747684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:26.683 [2024-07-12 08:45:01.748070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.683 [2024-07-12 08:45:01.748225] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:26.683 [2024-07-12 08:45:01.748393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.683 [2024-07-12 08:45:01.751000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.683 [2024-07-12 08:45:01.751160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:26.683 BaseBdev2 00:18:26.683 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:26.941 [2024-07-12 08:45:01.979788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.941 [2024-07-12 08:45:01.982251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.941 [2024-07-12 08:45:01.982651] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:26.941 [2024-07-12 08:45:01.982779] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.941 [2024-07-12 08:45:01.982953] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:26.941 [2024-07-12 08:45:01.983525] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:26.941 [2024-07-12 08:45:01.983642] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:26.941 [2024-07-12 08:45:01.983958] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.941 08:45:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.941 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.198 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.198 "name": "raid_bdev1", 00:18:27.198 "uuid": "cd916395-8323-4e6e-90ed-468d1c1664bf", 00:18:27.198 "strip_size_kb": 0, 00:18:27.198 "state": "online", 00:18:27.199 "raid_level": "raid1", 00:18:27.199 "superblock": true, 00:18:27.199 "num_base_bdevs": 2, 00:18:27.199 "num_base_bdevs_discovered": 2, 00:18:27.199 "num_base_bdevs_operational": 2, 00:18:27.199 "base_bdevs_list": [ 00:18:27.199 { 00:18:27.199 "name": "BaseBdev1", 00:18:27.199 "uuid": "a53421a6-6fbf-5fb5-8f4f-2c75d76a86e4", 00:18:27.199 "is_configured": true, 00:18:27.199 "data_offset": 2048, 00:18:27.199 "data_size": 63488 00:18:27.199 }, 00:18:27.199 { 00:18:27.199 "name": "BaseBdev2", 00:18:27.199 "uuid": "757ccab3-36c6-55e1-bbe9-221f7721bca9", 00:18:27.199 "is_configured": true, 00:18:27.199 "data_offset": 2048, 00:18:27.199 "data_size": 63488 00:18:27.199 } 00:18:27.199 ] 00:18:27.199 }' 00:18:27.199 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.199 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.763 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:27.763 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:28.021 [2024-07-12 08:45:03.033516] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:28.956 08:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.215 08:45:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.215 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.474 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.474 "name": "raid_bdev1", 00:18:29.474 "uuid": "cd916395-8323-4e6e-90ed-468d1c1664bf", 00:18:29.474 "strip_size_kb": 0, 00:18:29.474 "state": "online", 00:18:29.474 "raid_level": "raid1", 00:18:29.474 "superblock": true, 00:18:29.474 "num_base_bdevs": 2, 00:18:29.474 "num_base_bdevs_discovered": 2, 00:18:29.474 "num_base_bdevs_operational": 2, 00:18:29.474 "base_bdevs_list": [ 00:18:29.474 { 00:18:29.474 "name": "BaseBdev1", 00:18:29.474 "uuid": "a53421a6-6fbf-5fb5-8f4f-2c75d76a86e4", 00:18:29.474 "is_configured": true, 00:18:29.474 "data_offset": 2048, 00:18:29.474 "data_size": 63488 00:18:29.474 }, 00:18:29.474 { 00:18:29.474 "name": "BaseBdev2", 00:18:29.474 "uuid": "757ccab3-36c6-55e1-bbe9-221f7721bca9", 00:18:29.474 "is_configured": true, 00:18:29.474 "data_offset": 2048, 00:18:29.474 "data_size": 63488 00:18:29.474 } 00:18:29.474 ] 00:18:29.474 }' 00:18:29.474 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.474 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.044 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:30.302 [2024-07-12 08:45:05.403696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.302 [2024-07-12 08:45:05.403902] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.302 [2024-07-12 08:45:05.407009] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.302 [2024-07-12 08:45:05.407162] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.302 [2024-07-12 08:45:05.407287] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.302 [2024-07-12 08:45:05.407458] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:30.302 0 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 125535 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 125535 ']' 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 125535 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125535 00:18:30.302 killing process with pid 125535 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 125535' 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 125535 00:18:30.302 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 125535 00:18:30.302 [2024-07-12 08:45:05.449528] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.560 [2024-07-12 08:45:05.560318] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.931 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:31.931 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6EUwfXX6cc 00:18:31.931 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:31.932 00:18:31.932 real 0m7.733s 00:18:31.932 user 0m11.868s 00:18:31.932 sys 0m0.807s 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.932 ************************************ 00:18:31.932 END TEST raid_read_error_test 00:18:31.932 ************************************ 00:18:31.932 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 08:45:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:31.932 08:45:06 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:18:31.932 08:45:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:31.932 08:45:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.932 08:45:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 ************************************ 00:18:31.932 START TEST raid_write_error_test 00:18:31.932 ************************************ 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:31.932 08:45:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.nlExLl05d5 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=125753 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 125753 /var/tmp/spdk-raid.sock 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 125753 ']' 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:31.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.932 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 [2024-07-12 08:45:06.862854] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
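[Annotation] With bdevperf up, the write-error variant drives the same RPC choreography as the read test above. As a reading aid, here is a condensed sketch of that sequence, assembled from the rpc.py invocations that appear verbatim in the trace that follows; the loop and the RPC shorthand variable are illustrative, not part of the harness itself.

```bash
# Sketch of the RPC sequence traced below (subcommands and socket path
# taken from the log; the loop structure is illustrative only).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2; do
  # 32 MiB malloc bdev with 512-byte blocks, wrapped in an error-injection
  # bdev and exposed through a passthru bdev named BaseBdev$i.
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  $RPC bdev_error_create "BaseBdev${i}_malloc"
  $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# Assemble both passthru bdevs into a RAID1 volume with a superblock (-s).
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# While bdevperf runs I/O, fail writes on the first base bdev; for raid1
# this removes BaseBdev1 and the volume stays online with one base bdev.
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
```

The read variant above differs only in injecting `read failure`, which raid1 can satisfy from the mirror copy, so both base bdevs stay discovered there (expected_num_base_bdevs=2), whereas the write failure below drops the expectation to 1.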
00:18:31.932 [2024-07-12 08:45:06.863260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125753 ] 00:18:31.932 [2024-07-12 08:45:07.021285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.190 [2024-07-12 08:45:07.230199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.448 [2024-07-12 08:45:07.427153] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.706 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.706 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:32.706 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:32.706 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:32.965 BaseBdev1_malloc 00:18:32.965 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:33.222 true 00:18:33.222 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:33.480 [2024-07-12 08:45:08.586337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:33.480 [2024-07-12 08:45:08.586649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.480 [2024-07-12 08:45:08.586728] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.480 [2024-07-12 08:45:08.586852] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.480 [2024-07-12 08:45:08.589612] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.480 [2024-07-12 08:45:08.589779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.480 BaseBdev1 00:18:33.480 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:33.480 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.738 BaseBdev2_malloc 00:18:33.738 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:33.996 true 00:18:33.996 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.255 [2024-07-12 08:45:09.392007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.255 [2024-07-12 08:45:09.392445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.255 [2024-07-12 08:45:09.392660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:34.255 [2024-07-12 
08:45:09.392779] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.255 [2024-07-12 08:45:09.395439] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.255 [2024-07-12 08:45:09.395614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.255 BaseBdev2 00:18:34.255 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:34.559 [2024-07-12 08:45:09.676161] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.559 [2024-07-12 08:45:09.678586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.559 [2024-07-12 08:45:09.678980] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:34.559 [2024-07-12 08:45:09.679119] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:34.559 [2024-07-12 08:45:09.679390] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:34.559 [2024-07-12 08:45:09.679931] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:34.559 [2024-07-12 08:45:09.680053] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:34.559 [2024-07-12 08:45:09.680396] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.559 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.560 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.818 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.818 "name": "raid_bdev1", 00:18:34.818 "uuid": "60d9d03c-a8aa-4341-b6d0-12da6fb5816a", 00:18:34.818 "strip_size_kb": 0, 00:18:34.818 "state": "online", 00:18:34.818 "raid_level": "raid1", 00:18:34.818 "superblock": true, 00:18:34.818 "num_base_bdevs": 2, 00:18:34.818 "num_base_bdevs_discovered": 2, 00:18:34.818 "num_base_bdevs_operational": 2, 00:18:34.818 "base_bdevs_list": [ 00:18:34.818 { 00:18:34.818 "name": 
"BaseBdev1", 00:18:34.818 "uuid": "db5652de-7869-52dd-8be1-379965b8a2bf", 00:18:34.818 "is_configured": true, 00:18:34.818 "data_offset": 2048, 00:18:34.818 "data_size": 63488 00:18:34.818 }, 00:18:34.818 { 00:18:34.818 "name": "BaseBdev2", 00:18:34.818 "uuid": "93b7ce91-d5de-5f09-9138-947ee40cdb96", 00:18:34.818 "is_configured": true, 00:18:34.818 "data_offset": 2048, 00:18:34.818 "data_size": 63488 00:18:34.818 } 00:18:34.818 ] 00:18:34.818 }' 00:18:34.818 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.818 08:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.749 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:35.749 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:35.749 [2024-07-12 08:45:10.709860] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:36.682 [2024-07-12 08:45:11.844609] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:36.682 [2024-07-12 08:45:11.844961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.682 [2024-07-12 08:45:11.845320] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.682 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.248 
08:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.248 "name": "raid_bdev1", 00:18:37.248 "uuid": "60d9d03c-a8aa-4341-b6d0-12da6fb5816a", 00:18:37.248 "strip_size_kb": 0, 00:18:37.248 "state": "online", 00:18:37.248 "raid_level": "raid1", 00:18:37.248 "superblock": true, 00:18:37.248 "num_base_bdevs": 2, 00:18:37.248 "num_base_bdevs_discovered": 1, 00:18:37.248 "num_base_bdevs_operational": 1, 00:18:37.248 "base_bdevs_list": [ 00:18:37.248 { 00:18:37.248 "name": null, 00:18:37.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.248 "is_configured": false, 00:18:37.248 "data_offset": 2048, 00:18:37.248 "data_size": 63488 00:18:37.248 }, 00:18:37.248 { 00:18:37.248 "name": "BaseBdev2", 00:18:37.248 "uuid": "93b7ce91-d5de-5f09-9138-947ee40cdb96", 00:18:37.248 "is_configured": true, 00:18:37.248 "data_offset": 2048, 00:18:37.248 "data_size": 63488 00:18:37.248 } 00:18:37.248 ] 00:18:37.248 }' 00:18:37.248 08:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.248 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.813 08:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.070 [2024-07-12 08:45:13.072576] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.070 [2024-07-12 08:45:13.072785] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.070 [2024-07-12 08:45:13.075880] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.070 [2024-07-12 08:45:13.076038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.070 [2024-07-12 08:45:13.076206] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.070 [2024-07-12 08:45:13.076320] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:38.070 0 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 125753 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 125753 ']' 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 125753 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125753 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125753' 00:18:38.070 killing process with pid 125753 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 125753 00:18:38.070 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 125753 00:18:38.070 [2024-07-12 08:45:13.109156] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.070 [2024-07-12 
08:45:13.219411] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.nlExLl05d5 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:39.544 00:18:39.544 real 0m7.613s 00:18:39.544 user 0m11.683s 00:18:39.544 sys 0m0.753s 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:39.544 08:45:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.544 ************************************ 00:18:39.544 END TEST raid_write_error_test 00:18:39.544 ************************************ 00:18:39.544 08:45:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:39.544 08:45:14 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:18:39.544 08:45:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:39.544 08:45:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:39.544 08:45:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:39.544 08:45:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.544 08:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.544 ************************************ 00:18:39.544 START TEST raid_state_function_test 00:18:39.544 ************************************ 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:39.544 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=125969 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 125969' 00:18:39.545 Process raid pid: 125969 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 125969 /var/tmp/spdk-raid.sock 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 125969 ']' 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.545 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.545 [2024-07-12 08:45:14.528083] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
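[Annotation] The state-function test that follows calls verify_raid_bdev_state after every change, and the core of that helper is a single RPC plus a jq filter. A minimal sketch of the check, using the exact rpc.py and jq invocations visible in the trace (the comparisons are a simplified rendering of the helper's logic, and the variable names mirror its locals):

```bash
# Minimal sketch of verify_raid_bdev_state as traced below; rpc.py and jq
# invocations are taken from the log, the field checks are simplified.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_bdev_info=$($RPC bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid")')

# While base bdevs are still missing the volume reports "configuring";
# once all expected base bdevs are claimed it flips to "online".
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
operational=$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")

echo "Existed_Raid is $state with $discovered of $operational base bdevs"
```

In the trace this is what tracks the Existed_Raid volume through "configuring" with 0, 1, and then 2 of 3 base bdevs discovered, flipping to "online" once BaseBdev3 is claimed.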
00:18:39.545 [2024-07-12 08:45:14.529169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.545 [2024-07-12 08:45:14.688017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.801 [2024-07-12 08:45:14.927698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.058 [2024-07-12 08:45:15.133256] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.622 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.622 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:40.622 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.879 [2024-07-12 08:45:15.872978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.879 [2024-07-12 08:45:15.873273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.879 [2024-07-12 08:45:15.873398] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.879 [2024-07-12 08:45:15.873478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.879 [2024-07-12 08:45:15.873577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.879 [2024-07-12 08:45:15.873635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.879 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.137 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.137 "name": "Existed_Raid", 00:18:41.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.137 
"strip_size_kb": 64, 00:18:41.137 "state": "configuring", 00:18:41.137 "raid_level": "raid0", 00:18:41.137 "superblock": false, 00:18:41.137 "num_base_bdevs": 3, 00:18:41.137 "num_base_bdevs_discovered": 0, 00:18:41.137 "num_base_bdevs_operational": 3, 00:18:41.137 "base_bdevs_list": [ 00:18:41.137 { 00:18:41.137 "name": "BaseBdev1", 00:18:41.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.137 "is_configured": false, 00:18:41.137 "data_offset": 0, 00:18:41.137 "data_size": 0 00:18:41.137 }, 00:18:41.137 { 00:18:41.137 "name": "BaseBdev2", 00:18:41.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.137 "is_configured": false, 00:18:41.137 "data_offset": 0, 00:18:41.137 "data_size": 0 00:18:41.137 }, 00:18:41.137 { 00:18:41.137 "name": "BaseBdev3", 00:18:41.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.137 "is_configured": false, 00:18:41.137 "data_offset": 0, 00:18:41.137 "data_size": 0 00:18:41.137 } 00:18:41.137 ] 00:18:41.137 }' 00:18:41.137 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.137 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.069 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.069 [2024-07-12 08:45:17.245209] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.069 [2024-07-12 08:45:17.245507] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:42.326 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:42.326 [2024-07-12 08:45:17.493259] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.326 [2024-07-12 08:45:17.493520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.326 [2024-07-12 08:45:17.493657] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.327 [2024-07-12 08:45:17.493720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.327 [2024-07-12 08:45:17.493814] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.327 [2024-07-12 08:45:17.493879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.327 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.892 [2024-07-12 08:45:17.822612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.892 BaseBdev1 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.892 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.151 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.409 [ 00:18:43.409 { 00:18:43.409 "name": "BaseBdev1", 00:18:43.409 "aliases": [ 00:18:43.409 "ae05fcc9-d1dd-4271-aade-c47d94be0944" 00:18:43.409 ], 00:18:43.409 "product_name": "Malloc disk", 00:18:43.409 "block_size": 512, 00:18:43.409 "num_blocks": 65536, 00:18:43.409 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:43.409 "assigned_rate_limits": { 00:18:43.409 "rw_ios_per_sec": 0, 00:18:43.409 "rw_mbytes_per_sec": 0, 00:18:43.409 "r_mbytes_per_sec": 0, 00:18:43.409 "w_mbytes_per_sec": 0 00:18:43.409 }, 00:18:43.409 "claimed": true, 00:18:43.409 "claim_type": "exclusive_write", 00:18:43.409 "zoned": false, 00:18:43.409 "supported_io_types": { 00:18:43.409 "read": true, 00:18:43.409 "write": true, 00:18:43.409 "unmap": true, 00:18:43.409 "flush": true, 00:18:43.409 "reset": true, 00:18:43.409 "nvme_admin": false, 00:18:43.409 "nvme_io": false, 00:18:43.409 "nvme_io_md": false, 00:18:43.409 "write_zeroes": true, 00:18:43.409 "zcopy": true, 00:18:43.409 "get_zone_info": false, 00:18:43.409 "zone_management": false, 00:18:43.409 "zone_append": false, 00:18:43.409 "compare": false, 00:18:43.409 "compare_and_write": false, 00:18:43.409 "abort": true, 00:18:43.409 "seek_hole": false, 00:18:43.409 "seek_data": false, 00:18:43.409 "copy": true, 00:18:43.409 "nvme_iov_md": false 00:18:43.409 }, 00:18:43.409 "memory_domains": [ 00:18:43.409 { 00:18:43.409 "dma_device_id": "system", 00:18:43.409 "dma_device_type": 1 00:18:43.409 }, 00:18:43.409 { 00:18:43.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.409 "dma_device_type": 2 00:18:43.409 } 00:18:43.409 ], 00:18:43.409 "driver_specific": {} 00:18:43.409 } 00:18:43.409 ] 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.409 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.678 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.678 "name": "Existed_Raid", 00:18:43.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.679 "strip_size_kb": 64, 00:18:43.679 "state": "configuring", 00:18:43.679 "raid_level": "raid0", 00:18:43.679 "superblock": false, 00:18:43.679 "num_base_bdevs": 3, 00:18:43.679 "num_base_bdevs_discovered": 1, 00:18:43.679 "num_base_bdevs_operational": 3, 00:18:43.679 "base_bdevs_list": [ 00:18:43.679 { 00:18:43.679 "name": "BaseBdev1", 00:18:43.679 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:43.679 "is_configured": true, 00:18:43.679 "data_offset": 0, 00:18:43.679 "data_size": 65536 00:18:43.679 }, 00:18:43.679 { 00:18:43.679 "name": "BaseBdev2", 00:18:43.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.679 "is_configured": false, 00:18:43.679 "data_offset": 0, 00:18:43.679 "data_size": 0 00:18:43.679 }, 00:18:43.679 { 00:18:43.679 "name": "BaseBdev3", 00:18:43.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.679 "is_configured": false, 00:18:43.679 "data_offset": 0, 00:18:43.679 "data_size": 0 00:18:43.679 } 00:18:43.679 ] 00:18:43.679 }' 00:18:43.679 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.679 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.271 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.529 [2024-07-12 08:45:19.651101] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.529 [2024-07-12 08:45:19.651363] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:18:44.529 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:44.787 [2024-07-12 08:45:19.963213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.787 [2024-07-12 08:45:19.965626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.787 [2024-07-12 08:45:19.965825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.787 [2024-07-12 08:45:19.965968] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:44.787 [2024-07-12 08:45:19.966060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.044 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.302 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.302 "name": "Existed_Raid", 00:18:45.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.302 "strip_size_kb": 64, 00:18:45.302 "state": "configuring", 00:18:45.302 "raid_level": "raid0", 00:18:45.302 "superblock": false, 00:18:45.302 "num_base_bdevs": 3, 00:18:45.302 "num_base_bdevs_discovered": 1, 00:18:45.302 "num_base_bdevs_operational": 3, 00:18:45.302 "base_bdevs_list": [ 00:18:45.302 { 00:18:45.302 "name": "BaseBdev1", 00:18:45.302 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:45.302 "is_configured": true, 00:18:45.302 "data_offset": 0, 00:18:45.302 "data_size": 65536 00:18:45.302 }, 00:18:45.302 { 00:18:45.302 "name": "BaseBdev2", 00:18:45.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.302 "is_configured": false, 00:18:45.302 "data_offset": 0, 00:18:45.302 "data_size": 0 00:18:45.302 }, 00:18:45.302 { 00:18:45.302 "name": "BaseBdev3", 00:18:45.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.302 "is_configured": false, 00:18:45.302 "data_offset": 0, 00:18:45.302 "data_size": 0 00:18:45.302 } 00:18:45.302 ] 00:18:45.302 }' 00:18:45.302 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.302 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.868 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:46.432 [2024-07-12 08:45:21.328350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.432 BaseBdev2 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:46.432 
08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.432 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:46.691 [ 00:18:46.691 { 00:18:46.691 "name": "BaseBdev2", 00:18:46.691 "aliases": [ 00:18:46.691 "966f89b4-b65c-4582-a2a7-0d5ed47d637e" 00:18:46.691 ], 00:18:46.691 "product_name": "Malloc disk", 00:18:46.691 "block_size": 512, 00:18:46.691 "num_blocks": 65536, 00:18:46.691 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:46.691 "assigned_rate_limits": { 00:18:46.691 "rw_ios_per_sec": 0, 00:18:46.691 "rw_mbytes_per_sec": 0, 00:18:46.691 "r_mbytes_per_sec": 0, 00:18:46.691 "w_mbytes_per_sec": 0 00:18:46.691 }, 00:18:46.692 "claimed": true, 00:18:46.692 "claim_type": "exclusive_write", 00:18:46.692 "zoned": false, 00:18:46.692 "supported_io_types": { 00:18:46.692 "read": true, 00:18:46.692 "write": true, 00:18:46.692 "unmap": true, 00:18:46.692 "flush": true, 00:18:46.692 "reset": true, 00:18:46.692 "nvme_admin": false, 00:18:46.692 "nvme_io": false, 00:18:46.692 "nvme_io_md": false, 00:18:46.692 "write_zeroes": true, 00:18:46.692 "zcopy": true, 00:18:46.692 "get_zone_info": false, 00:18:46.692 "zone_management": false, 00:18:46.692 "zone_append": false, 00:18:46.692 "compare": false, 00:18:46.692 "compare_and_write": false, 00:18:46.692 "abort": true, 00:18:46.692 "seek_hole": false, 00:18:46.692 "seek_data": false, 00:18:46.692 "copy": true, 00:18:46.692 "nvme_iov_md": false 00:18:46.692 }, 00:18:46.692 "memory_domains": [ 00:18:46.692 { 00:18:46.692 "dma_device_id": "system", 00:18:46.692 "dma_device_type": 1 00:18:46.692 }, 00:18:46.692 { 00:18:46.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.692 "dma_device_type": 2 00:18:46.692 } 00:18:46.692 ], 00:18:46.692 "driver_specific": {} 00:18:46.692 } 00:18:46.692 ] 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.692 08:45:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.692 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.259 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.259 "name": "Existed_Raid", 00:18:47.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.259 "strip_size_kb": 64, 00:18:47.259 "state": "configuring", 00:18:47.259 "raid_level": "raid0", 00:18:47.259 "superblock": false, 00:18:47.259 "num_base_bdevs": 3, 00:18:47.259 "num_base_bdevs_discovered": 2, 00:18:47.259 "num_base_bdevs_operational": 3, 00:18:47.259 "base_bdevs_list": [ 00:18:47.259 { 00:18:47.259 "name": "BaseBdev1", 00:18:47.259 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:47.259 "is_configured": true, 00:18:47.259 "data_offset": 0, 00:18:47.259 "data_size": 65536 00:18:47.259 }, 00:18:47.259 { 00:18:47.259 "name": "BaseBdev2", 00:18:47.259 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:47.259 "is_configured": true, 00:18:47.259 "data_offset": 0, 00:18:47.259 "data_size": 65536 00:18:47.259 }, 00:18:47.259 { 00:18:47.259 "name": "BaseBdev3", 00:18:47.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.259 "is_configured": false, 00:18:47.259 "data_offset": 0, 00:18:47.259 "data_size": 0 00:18:47.259 } 00:18:47.259 ] 00:18:47.259 }' 00:18:47.259 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.259 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:48.098 [2024-07-12 08:45:23.171478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.098 [2024-07-12 08:45:23.171731] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:48.099 [2024-07-12 08:45:23.171774] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:48.099 [2024-07-12 08:45:23.172042] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:48.099 [2024-07-12 08:45:23.172575] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:48.099 [2024-07-12 08:45:23.172704] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:48.099 [2024-07-12 08:45:23.173081] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.099 BaseBdev3 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:48.099 08:45:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.356 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:48.614 [ 00:18:48.614 { 00:18:48.614 "name": "BaseBdev3", 00:18:48.614 "aliases": [ 00:18:48.614 "8c2f3e14-556b-4c62-9845-4d4953ac97d8" 00:18:48.614 ], 00:18:48.614 "product_name": "Malloc disk", 00:18:48.614 "block_size": 512, 00:18:48.614 "num_blocks": 65536, 00:18:48.614 "uuid": "8c2f3e14-556b-4c62-9845-4d4953ac97d8", 00:18:48.614 "assigned_rate_limits": { 00:18:48.614 "rw_ios_per_sec": 0, 00:18:48.614 "rw_mbytes_per_sec": 0, 00:18:48.614 "r_mbytes_per_sec": 0, 00:18:48.614 "w_mbytes_per_sec": 0 00:18:48.614 }, 00:18:48.614 "claimed": true, 00:18:48.614 "claim_type": "exclusive_write", 00:18:48.614 "zoned": false, 00:18:48.614 "supported_io_types": { 00:18:48.614 "read": true, 00:18:48.614 "write": true, 00:18:48.614 "unmap": true, 00:18:48.614 "flush": true, 00:18:48.614 "reset": true, 00:18:48.614 "nvme_admin": false, 00:18:48.614 "nvme_io": false, 00:18:48.614 "nvme_io_md": false, 00:18:48.614 "write_zeroes": true, 00:18:48.614 "zcopy": true, 00:18:48.614 "get_zone_info": false, 00:18:48.614 "zone_management": false, 00:18:48.614 "zone_append": false, 00:18:48.614 "compare": false, 00:18:48.614 "compare_and_write": false, 00:18:48.614 "abort": true, 00:18:48.614 "seek_hole": false, 00:18:48.614 "seek_data": false, 00:18:48.614 "copy": true, 00:18:48.614 "nvme_iov_md": false 00:18:48.614 }, 00:18:48.614 "memory_domains": [ 00:18:48.615 { 00:18:48.615 "dma_device_id": "system", 00:18:48.615 "dma_device_type": 1 00:18:48.615 }, 00:18:48.615 { 00:18:48.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.615 "dma_device_type": 2 00:18:48.615 } 00:18:48.615 ], 00:18:48.615 "driver_specific": {} 00:18:48.615 } 00:18:48.615 ] 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.615 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.873 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.873 "name": "Existed_Raid", 00:18:48.873 "uuid": "3bfb5af5-bf5d-4156-b973-a2e3ad734ea9", 00:18:48.873 "strip_size_kb": 64, 00:18:48.873 "state": "online", 00:18:48.873 "raid_level": "raid0", 00:18:48.873 "superblock": false, 00:18:48.873 "num_base_bdevs": 3, 00:18:48.873 "num_base_bdevs_discovered": 3, 00:18:48.873 "num_base_bdevs_operational": 3, 00:18:48.873 "base_bdevs_list": [ 00:18:48.873 { 00:18:48.873 "name": "BaseBdev1", 00:18:48.873 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:48.873 "is_configured": true, 00:18:48.873 "data_offset": 0, 00:18:48.873 "data_size": 65536 00:18:48.873 }, 00:18:48.873 { 00:18:48.873 "name": "BaseBdev2", 00:18:48.873 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:48.873 "is_configured": true, 00:18:48.873 "data_offset": 0, 00:18:48.873 "data_size": 65536 00:18:48.873 }, 00:18:48.873 { 00:18:48.873 "name": "BaseBdev3", 00:18:48.873 "uuid": "8c2f3e14-556b-4c62-9845-4d4953ac97d8", 00:18:48.873 "is_configured": true, 00:18:48.873 "data_offset": 0, 00:18:48.873 "data_size": 65536 00:18:48.873 } 00:18:48.873 ] 00:18:48.873 }' 00:18:48.873 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.873 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:49.807 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:50.066 [2024-07-12 08:45:25.073901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:50.066 "name": "Existed_Raid", 00:18:50.066 "aliases": [ 00:18:50.066 "3bfb5af5-bf5d-4156-b973-a2e3ad734ea9" 00:18:50.066 ], 00:18:50.066 "product_name": "Raid Volume", 00:18:50.066 "block_size": 512, 00:18:50.066 "num_blocks": 196608, 00:18:50.066 "uuid": "3bfb5af5-bf5d-4156-b973-a2e3ad734ea9", 00:18:50.066 "assigned_rate_limits": { 00:18:50.066 "rw_ios_per_sec": 0, 00:18:50.066 "rw_mbytes_per_sec": 0, 00:18:50.066 "r_mbytes_per_sec": 0, 00:18:50.066 "w_mbytes_per_sec": 0 00:18:50.066 }, 00:18:50.066 "claimed": false, 00:18:50.066 "zoned": false, 00:18:50.066 "supported_io_types": { 00:18:50.066 "read": true, 00:18:50.066 "write": true, 00:18:50.066 "unmap": true, 00:18:50.066 "flush": true, 00:18:50.066 "reset": true, 
00:18:50.066 "nvme_admin": false, 00:18:50.066 "nvme_io": false, 00:18:50.066 "nvme_io_md": false, 00:18:50.066 "write_zeroes": true, 00:18:50.066 "zcopy": false, 00:18:50.066 "get_zone_info": false, 00:18:50.066 "zone_management": false, 00:18:50.066 "zone_append": false, 00:18:50.066 "compare": false, 00:18:50.066 "compare_and_write": false, 00:18:50.066 "abort": false, 00:18:50.066 "seek_hole": false, 00:18:50.066 "seek_data": false, 00:18:50.066 "copy": false, 00:18:50.066 "nvme_iov_md": false 00:18:50.066 }, 00:18:50.066 "memory_domains": [ 00:18:50.066 { 00:18:50.066 "dma_device_id": "system", 00:18:50.066 "dma_device_type": 1 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.066 "dma_device_type": 2 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "dma_device_id": "system", 00:18:50.066 "dma_device_type": 1 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.066 "dma_device_type": 2 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "dma_device_id": "system", 00:18:50.066 "dma_device_type": 1 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.066 "dma_device_type": 2 00:18:50.066 } 00:18:50.066 ], 00:18:50.066 "driver_specific": { 00:18:50.066 "raid": { 00:18:50.066 "uuid": "3bfb5af5-bf5d-4156-b973-a2e3ad734ea9", 00:18:50.066 "strip_size_kb": 64, 00:18:50.066 "state": "online", 00:18:50.066 "raid_level": "raid0", 00:18:50.066 "superblock": false, 00:18:50.066 "num_base_bdevs": 3, 00:18:50.066 "num_base_bdevs_discovered": 3, 00:18:50.066 "num_base_bdevs_operational": 3, 00:18:50.066 "base_bdevs_list": [ 00:18:50.066 { 00:18:50.066 "name": "BaseBdev1", 00:18:50.066 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:50.066 "is_configured": true, 00:18:50.066 "data_offset": 0, 00:18:50.066 "data_size": 65536 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "name": "BaseBdev2", 00:18:50.066 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:50.066 "is_configured": true, 00:18:50.066 "data_offset": 0, 00:18:50.066 "data_size": 65536 00:18:50.066 }, 00:18:50.066 { 00:18:50.066 "name": "BaseBdev3", 00:18:50.066 "uuid": "8c2f3e14-556b-4c62-9845-4d4953ac97d8", 00:18:50.066 "is_configured": true, 00:18:50.066 "data_offset": 0, 00:18:50.066 "data_size": 65536 00:18:50.066 } 00:18:50.066 ] 00:18:50.066 } 00:18:50.066 } 00:18:50.066 }' 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:50.066 BaseBdev2 00:18:50.066 BaseBdev3' 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:50.066 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.325 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:50.325 "name": "BaseBdev1", 00:18:50.325 "aliases": [ 00:18:50.325 "ae05fcc9-d1dd-4271-aade-c47d94be0944" 00:18:50.325 ], 00:18:50.325 "product_name": "Malloc disk", 00:18:50.325 "block_size": 512, 00:18:50.325 "num_blocks": 65536, 00:18:50.325 "uuid": "ae05fcc9-d1dd-4271-aade-c47d94be0944", 00:18:50.325 
"assigned_rate_limits": { 00:18:50.325 "rw_ios_per_sec": 0, 00:18:50.325 "rw_mbytes_per_sec": 0, 00:18:50.325 "r_mbytes_per_sec": 0, 00:18:50.325 "w_mbytes_per_sec": 0 00:18:50.325 }, 00:18:50.325 "claimed": true, 00:18:50.325 "claim_type": "exclusive_write", 00:18:50.325 "zoned": false, 00:18:50.325 "supported_io_types": { 00:18:50.325 "read": true, 00:18:50.325 "write": true, 00:18:50.325 "unmap": true, 00:18:50.325 "flush": true, 00:18:50.325 "reset": true, 00:18:50.325 "nvme_admin": false, 00:18:50.325 "nvme_io": false, 00:18:50.325 "nvme_io_md": false, 00:18:50.325 "write_zeroes": true, 00:18:50.325 "zcopy": true, 00:18:50.325 "get_zone_info": false, 00:18:50.325 "zone_management": false, 00:18:50.325 "zone_append": false, 00:18:50.325 "compare": false, 00:18:50.325 "compare_and_write": false, 00:18:50.325 "abort": true, 00:18:50.325 "seek_hole": false, 00:18:50.325 "seek_data": false, 00:18:50.325 "copy": true, 00:18:50.325 "nvme_iov_md": false 00:18:50.325 }, 00:18:50.325 "memory_domains": [ 00:18:50.325 { 00:18:50.325 "dma_device_id": "system", 00:18:50.325 "dma_device_type": 1 00:18:50.325 }, 00:18:50.325 { 00:18:50.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.325 "dma_device_type": 2 00:18:50.325 } 00:18:50.325 ], 00:18:50.325 "driver_specific": {} 00:18:50.325 }' 00:18:50.325 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.325 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.584 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.842 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:51.100 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.100 "name": "BaseBdev2", 00:18:51.100 "aliases": [ 00:18:51.100 "966f89b4-b65c-4582-a2a7-0d5ed47d637e" 00:18:51.100 ], 00:18:51.100 "product_name": "Malloc disk", 00:18:51.100 "block_size": 512, 00:18:51.100 "num_blocks": 65536, 00:18:51.100 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:51.100 "assigned_rate_limits": { 00:18:51.100 "rw_ios_per_sec": 0, 00:18:51.100 "rw_mbytes_per_sec": 0, 00:18:51.100 "r_mbytes_per_sec": 0, 00:18:51.100 "w_mbytes_per_sec": 0 00:18:51.100 }, 00:18:51.100 
"claimed": true, 00:18:51.100 "claim_type": "exclusive_write", 00:18:51.100 "zoned": false, 00:18:51.100 "supported_io_types": { 00:18:51.100 "read": true, 00:18:51.100 "write": true, 00:18:51.100 "unmap": true, 00:18:51.100 "flush": true, 00:18:51.100 "reset": true, 00:18:51.100 "nvme_admin": false, 00:18:51.100 "nvme_io": false, 00:18:51.100 "nvme_io_md": false, 00:18:51.100 "write_zeroes": true, 00:18:51.100 "zcopy": true, 00:18:51.100 "get_zone_info": false, 00:18:51.100 "zone_management": false, 00:18:51.100 "zone_append": false, 00:18:51.100 "compare": false, 00:18:51.100 "compare_and_write": false, 00:18:51.100 "abort": true, 00:18:51.100 "seek_hole": false, 00:18:51.100 "seek_data": false, 00:18:51.100 "copy": true, 00:18:51.100 "nvme_iov_md": false 00:18:51.100 }, 00:18:51.100 "memory_domains": [ 00:18:51.100 { 00:18:51.100 "dma_device_id": "system", 00:18:51.100 "dma_device_type": 1 00:18:51.100 }, 00:18:51.100 { 00:18:51.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.100 "dma_device_type": 2 00:18:51.100 } 00:18:51.100 ], 00:18:51.100 "driver_specific": {} 00:18:51.100 }' 00:18:51.100 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.100 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.358 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:51.616 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.879 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.879 "name": "BaseBdev3", 00:18:51.879 "aliases": [ 00:18:51.879 "8c2f3e14-556b-4c62-9845-4d4953ac97d8" 00:18:51.879 ], 00:18:51.879 "product_name": "Malloc disk", 00:18:51.879 "block_size": 512, 00:18:51.879 "num_blocks": 65536, 00:18:51.879 "uuid": "8c2f3e14-556b-4c62-9845-4d4953ac97d8", 00:18:51.879 "assigned_rate_limits": { 00:18:51.879 "rw_ios_per_sec": 0, 00:18:51.879 "rw_mbytes_per_sec": 0, 00:18:51.879 "r_mbytes_per_sec": 0, 00:18:51.879 "w_mbytes_per_sec": 0 00:18:51.879 }, 00:18:51.879 "claimed": true, 00:18:51.879 "claim_type": "exclusive_write", 00:18:51.879 "zoned": false, 00:18:51.879 "supported_io_types": { 00:18:51.879 "read": true, 00:18:51.879 "write": true, 00:18:51.879 
"unmap": true, 00:18:51.879 "flush": true, 00:18:51.879 "reset": true, 00:18:51.879 "nvme_admin": false, 00:18:51.879 "nvme_io": false, 00:18:51.879 "nvme_io_md": false, 00:18:51.879 "write_zeroes": true, 00:18:51.879 "zcopy": true, 00:18:51.879 "get_zone_info": false, 00:18:51.879 "zone_management": false, 00:18:51.879 "zone_append": false, 00:18:51.879 "compare": false, 00:18:51.879 "compare_and_write": false, 00:18:51.879 "abort": true, 00:18:51.879 "seek_hole": false, 00:18:51.879 "seek_data": false, 00:18:51.879 "copy": true, 00:18:51.879 "nvme_iov_md": false 00:18:51.879 }, 00:18:51.879 "memory_domains": [ 00:18:51.879 { 00:18:51.879 "dma_device_id": "system", 00:18:51.879 "dma_device_type": 1 00:18:51.879 }, 00:18:51.879 { 00:18:51.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.879 "dma_device_type": 2 00:18:51.879 } 00:18:51.879 ], 00:18:51.879 "driver_specific": {} 00:18:51.879 }' 00:18:51.879 08:45:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.879 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.138 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.395 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.395 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.395 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.395 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.395 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:52.653 [2024-07-12 08:45:27.807172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.653 [2024-07-12 08:45:27.807407] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.653 [2024-07-12 08:45:27.807614] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.910 08:45:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.168 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.168 "name": "Existed_Raid", 00:18:53.168 "uuid": "3bfb5af5-bf5d-4156-b973-a2e3ad734ea9", 00:18:53.168 "strip_size_kb": 64, 00:18:53.168 "state": "offline", 00:18:53.168 "raid_level": "raid0", 00:18:53.168 "superblock": false, 00:18:53.168 "num_base_bdevs": 3, 00:18:53.168 "num_base_bdevs_discovered": 2, 00:18:53.168 "num_base_bdevs_operational": 2, 00:18:53.168 "base_bdevs_list": [ 00:18:53.168 { 00:18:53.168 "name": null, 00:18:53.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.168 "is_configured": false, 00:18:53.168 "data_offset": 0, 00:18:53.168 "data_size": 65536 00:18:53.168 }, 00:18:53.168 { 00:18:53.168 "name": "BaseBdev2", 00:18:53.168 "uuid": "966f89b4-b65c-4582-a2a7-0d5ed47d637e", 00:18:53.168 "is_configured": true, 00:18:53.168 "data_offset": 0, 00:18:53.168 "data_size": 65536 00:18:53.168 }, 00:18:53.168 { 00:18:53.168 "name": "BaseBdev3", 00:18:53.168 "uuid": "8c2f3e14-556b-4c62-9845-4d4953ac97d8", 00:18:53.168 "is_configured": true, 00:18:53.168 "data_offset": 0, 00:18:53.168 "data_size": 65536 00:18:53.168 } 00:18:53.168 ] 00:18:53.168 }' 00:18:53.168 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.168 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.100 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:54.100 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:54.100 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.100 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:54.100 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:54.100 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.100 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:54.357 [2024-07-12 08:45:29.472081] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
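The trace above deletes BaseBdev1 out of the raid0 volume; because raid0 carries no redundancy (has_redundancy returns 1), the test expects Existed_Raid to drop from "online" to "offline" while still reporting three configured slots with only two base bdevs discovered. A minimal manual reproduction of that step, assuming the same RPC socket (/var/tmp/spdk-raid.sock), the in-tree scripts/rpc.py client path used throughout this run, and the bdev names from this test, might look like:

    # sketch only: repeats the check that verify_raid_bdev_state performs in this log
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_delete BaseBdev1                        # remove one base bdev of the raid0 set
    state=$($rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')     # read back the raid bdev state
    [[ "$state" == "offline" ]] || echo "unexpected raid state: $state"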
00:18:54.615 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:54.615 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:54.615 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.615 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:54.873 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:54.873 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.873 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:55.131 [2024-07-12 08:45:30.094589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:55.131 [2024-07-12 08:45:30.094883] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:55.131 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:55.131 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:55.131 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.131 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:55.389 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:55.647 BaseBdev2 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:55.647 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.904 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.180 [ 00:18:56.180 { 00:18:56.180 "name": "BaseBdev2", 
00:18:56.180 "aliases": [ 00:18:56.180 "141d944b-3b5e-4578-b9a1-7718049d8889" 00:18:56.180 ], 00:18:56.180 "product_name": "Malloc disk", 00:18:56.180 "block_size": 512, 00:18:56.180 "num_blocks": 65536, 00:18:56.180 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:18:56.180 "assigned_rate_limits": { 00:18:56.180 "rw_ios_per_sec": 0, 00:18:56.180 "rw_mbytes_per_sec": 0, 00:18:56.180 "r_mbytes_per_sec": 0, 00:18:56.180 "w_mbytes_per_sec": 0 00:18:56.180 }, 00:18:56.180 "claimed": false, 00:18:56.181 "zoned": false, 00:18:56.181 "supported_io_types": { 00:18:56.181 "read": true, 00:18:56.181 "write": true, 00:18:56.181 "unmap": true, 00:18:56.181 "flush": true, 00:18:56.181 "reset": true, 00:18:56.181 "nvme_admin": false, 00:18:56.181 "nvme_io": false, 00:18:56.181 "nvme_io_md": false, 00:18:56.181 "write_zeroes": true, 00:18:56.181 "zcopy": true, 00:18:56.181 "get_zone_info": false, 00:18:56.181 "zone_management": false, 00:18:56.181 "zone_append": false, 00:18:56.181 "compare": false, 00:18:56.181 "compare_and_write": false, 00:18:56.181 "abort": true, 00:18:56.181 "seek_hole": false, 00:18:56.181 "seek_data": false, 00:18:56.181 "copy": true, 00:18:56.181 "nvme_iov_md": false 00:18:56.181 }, 00:18:56.181 "memory_domains": [ 00:18:56.181 { 00:18:56.181 "dma_device_id": "system", 00:18:56.181 "dma_device_type": 1 00:18:56.181 }, 00:18:56.181 { 00:18:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.181 "dma_device_type": 2 00:18:56.181 } 00:18:56.181 ], 00:18:56.181 "driver_specific": {} 00:18:56.181 } 00:18:56.181 ] 00:18:56.181 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:56.181 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:56.181 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:56.181 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.454 BaseBdev3 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:56.454 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.712 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:56.971 [ 00:18:56.971 { 00:18:56.971 "name": "BaseBdev3", 00:18:56.971 "aliases": [ 00:18:56.971 "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2" 00:18:56.971 ], 00:18:56.971 "product_name": "Malloc disk", 00:18:56.971 "block_size": 512, 00:18:56.971 "num_blocks": 65536, 00:18:56.971 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:18:56.971 "assigned_rate_limits": { 00:18:56.971 "rw_ios_per_sec": 0, 
00:18:56.971 "rw_mbytes_per_sec": 0, 00:18:56.971 "r_mbytes_per_sec": 0, 00:18:56.971 "w_mbytes_per_sec": 0 00:18:56.971 }, 00:18:56.971 "claimed": false, 00:18:56.971 "zoned": false, 00:18:56.971 "supported_io_types": { 00:18:56.971 "read": true, 00:18:56.971 "write": true, 00:18:56.971 "unmap": true, 00:18:56.971 "flush": true, 00:18:56.971 "reset": true, 00:18:56.971 "nvme_admin": false, 00:18:56.971 "nvme_io": false, 00:18:56.971 "nvme_io_md": false, 00:18:56.971 "write_zeroes": true, 00:18:56.971 "zcopy": true, 00:18:56.971 "get_zone_info": false, 00:18:56.971 "zone_management": false, 00:18:56.971 "zone_append": false, 00:18:56.971 "compare": false, 00:18:56.971 "compare_and_write": false, 00:18:56.971 "abort": true, 00:18:56.971 "seek_hole": false, 00:18:56.971 "seek_data": false, 00:18:56.971 "copy": true, 00:18:56.971 "nvme_iov_md": false 00:18:56.971 }, 00:18:56.971 "memory_domains": [ 00:18:56.971 { 00:18:56.971 "dma_device_id": "system", 00:18:56.971 "dma_device_type": 1 00:18:56.971 }, 00:18:56.971 { 00:18:56.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.971 "dma_device_type": 2 00:18:56.971 } 00:18:56.971 ], 00:18:56.971 "driver_specific": {} 00:18:56.971 } 00:18:56.971 ] 00:18:56.971 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:56.971 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:56.971 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:56.971 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:57.230 [2024-07-12 08:45:32.209011] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.230 [2024-07-12 08:45:32.209287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.230 [2024-07-12 08:45:32.209448] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.230 [2024-07-12 08:45:32.211646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.230 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.488 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.488 "name": "Existed_Raid", 00:18:57.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.488 "strip_size_kb": 64, 00:18:57.488 "state": "configuring", 00:18:57.488 "raid_level": "raid0", 00:18:57.488 "superblock": false, 00:18:57.488 "num_base_bdevs": 3, 00:18:57.488 "num_base_bdevs_discovered": 2, 00:18:57.488 "num_base_bdevs_operational": 3, 00:18:57.488 "base_bdevs_list": [ 00:18:57.488 { 00:18:57.488 "name": "BaseBdev1", 00:18:57.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.488 "is_configured": false, 00:18:57.488 "data_offset": 0, 00:18:57.488 "data_size": 0 00:18:57.488 }, 00:18:57.488 { 00:18:57.488 "name": "BaseBdev2", 00:18:57.488 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:18:57.488 "is_configured": true, 00:18:57.488 "data_offset": 0, 00:18:57.488 "data_size": 65536 00:18:57.488 }, 00:18:57.488 { 00:18:57.488 "name": "BaseBdev3", 00:18:57.488 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:18:57.488 "is_configured": true, 00:18:57.488 "data_offset": 0, 00:18:57.488 "data_size": 65536 00:18:57.488 } 00:18:57.488 ] 00:18:57.488 }' 00:18:57.488 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.488 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.054 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:58.311 [2024-07-12 08:45:33.425279] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.311 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.568 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.568 "name": "Existed_Raid", 
00:18:58.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.568 "strip_size_kb": 64, 00:18:58.568 "state": "configuring", 00:18:58.568 "raid_level": "raid0", 00:18:58.568 "superblock": false, 00:18:58.568 "num_base_bdevs": 3, 00:18:58.568 "num_base_bdevs_discovered": 1, 00:18:58.568 "num_base_bdevs_operational": 3, 00:18:58.568 "base_bdevs_list": [ 00:18:58.568 { 00:18:58.568 "name": "BaseBdev1", 00:18:58.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.568 "is_configured": false, 00:18:58.568 "data_offset": 0, 00:18:58.568 "data_size": 0 00:18:58.568 }, 00:18:58.568 { 00:18:58.568 "name": null, 00:18:58.568 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:18:58.568 "is_configured": false, 00:18:58.568 "data_offset": 0, 00:18:58.568 "data_size": 65536 00:18:58.568 }, 00:18:58.568 { 00:18:58.568 "name": "BaseBdev3", 00:18:58.568 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:18:58.568 "is_configured": true, 00:18:58.568 "data_offset": 0, 00:18:58.568 "data_size": 65536 00:18:58.568 } 00:18:58.568 ] 00:18:58.568 }' 00:18:58.568 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.568 08:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.502 08:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.502 08:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:59.502 08:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:59.502 08:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.761 [2024-07-12 08:45:34.904612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.761 BaseBdev1 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:59.761 08:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.018 08:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.275 [ 00:19:00.275 { 00:19:00.275 "name": "BaseBdev1", 00:19:00.275 "aliases": [ 00:19:00.275 "eb4b1950-a088-40a0-8fb0-1720bb28a5d8" 00:19:00.275 ], 00:19:00.275 "product_name": "Malloc disk", 00:19:00.275 "block_size": 512, 00:19:00.275 "num_blocks": 65536, 00:19:00.275 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:00.275 "assigned_rate_limits": { 00:19:00.275 "rw_ios_per_sec": 0, 00:19:00.275 "rw_mbytes_per_sec": 0, 00:19:00.275 
"r_mbytes_per_sec": 0, 00:19:00.275 "w_mbytes_per_sec": 0 00:19:00.275 }, 00:19:00.275 "claimed": true, 00:19:00.275 "claim_type": "exclusive_write", 00:19:00.275 "zoned": false, 00:19:00.275 "supported_io_types": { 00:19:00.275 "read": true, 00:19:00.275 "write": true, 00:19:00.275 "unmap": true, 00:19:00.275 "flush": true, 00:19:00.275 "reset": true, 00:19:00.275 "nvme_admin": false, 00:19:00.275 "nvme_io": false, 00:19:00.275 "nvme_io_md": false, 00:19:00.275 "write_zeroes": true, 00:19:00.275 "zcopy": true, 00:19:00.275 "get_zone_info": false, 00:19:00.275 "zone_management": false, 00:19:00.275 "zone_append": false, 00:19:00.275 "compare": false, 00:19:00.275 "compare_and_write": false, 00:19:00.275 "abort": true, 00:19:00.275 "seek_hole": false, 00:19:00.275 "seek_data": false, 00:19:00.275 "copy": true, 00:19:00.275 "nvme_iov_md": false 00:19:00.275 }, 00:19:00.275 "memory_domains": [ 00:19:00.275 { 00:19:00.275 "dma_device_id": "system", 00:19:00.275 "dma_device_type": 1 00:19:00.275 }, 00:19:00.275 { 00:19:00.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.275 "dma_device_type": 2 00:19:00.275 } 00:19:00.275 ], 00:19:00.275 "driver_specific": {} 00:19:00.275 } 00:19:00.275 ] 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.275 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.532 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.532 "name": "Existed_Raid", 00:19:00.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.532 "strip_size_kb": 64, 00:19:00.532 "state": "configuring", 00:19:00.532 "raid_level": "raid0", 00:19:00.532 "superblock": false, 00:19:00.532 "num_base_bdevs": 3, 00:19:00.532 "num_base_bdevs_discovered": 2, 00:19:00.532 "num_base_bdevs_operational": 3, 00:19:00.532 "base_bdevs_list": [ 00:19:00.532 { 00:19:00.532 "name": "BaseBdev1", 00:19:00.532 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:00.532 "is_configured": true, 00:19:00.532 "data_offset": 0, 00:19:00.532 "data_size": 65536 00:19:00.532 }, 00:19:00.532 { 00:19:00.532 "name": 
null, 00:19:00.532 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:00.532 "is_configured": false, 00:19:00.532 "data_offset": 0, 00:19:00.532 "data_size": 65536 00:19:00.532 }, 00:19:00.532 { 00:19:00.532 "name": "BaseBdev3", 00:19:00.532 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:00.532 "is_configured": true, 00:19:00.532 "data_offset": 0, 00:19:00.532 "data_size": 65536 00:19:00.532 } 00:19:00.532 ] 00:19:00.532 }' 00:19:00.532 08:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.532 08:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.464 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.464 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:01.722 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:01.722 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:01.722 [2024-07-12 08:45:36.901229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.980 08:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.238 08:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.238 "name": "Existed_Raid", 00:19:02.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.238 "strip_size_kb": 64, 00:19:02.238 "state": "configuring", 00:19:02.238 "raid_level": "raid0", 00:19:02.238 "superblock": false, 00:19:02.238 "num_base_bdevs": 3, 00:19:02.238 "num_base_bdevs_discovered": 1, 00:19:02.238 "num_base_bdevs_operational": 3, 00:19:02.238 "base_bdevs_list": [ 00:19:02.238 { 00:19:02.238 "name": "BaseBdev1", 00:19:02.238 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:02.238 "is_configured": true, 00:19:02.238 "data_offset": 0, 00:19:02.238 "data_size": 65536 
00:19:02.238 }, 00:19:02.238 { 00:19:02.238 "name": null, 00:19:02.238 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:02.238 "is_configured": false, 00:19:02.238 "data_offset": 0, 00:19:02.238 "data_size": 65536 00:19:02.238 }, 00:19:02.238 { 00:19:02.238 "name": null, 00:19:02.238 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:02.238 "is_configured": false, 00:19:02.238 "data_offset": 0, 00:19:02.238 "data_size": 65536 00:19:02.238 } 00:19:02.238 ] 00:19:02.238 }' 00:19:02.238 08:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.238 08:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.804 08:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.804 08:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:03.062 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:03.062 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:03.320 [2024-07-12 08:45:38.329546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.320 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.579 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.579 "name": "Existed_Raid", 00:19:03.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.579 "strip_size_kb": 64, 00:19:03.579 "state": "configuring", 00:19:03.579 "raid_level": "raid0", 00:19:03.579 "superblock": false, 00:19:03.579 "num_base_bdevs": 3, 00:19:03.579 "num_base_bdevs_discovered": 2, 00:19:03.579 "num_base_bdevs_operational": 3, 00:19:03.579 "base_bdevs_list": [ 00:19:03.579 { 00:19:03.579 "name": "BaseBdev1", 00:19:03.579 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:03.579 
"is_configured": true, 00:19:03.579 "data_offset": 0, 00:19:03.579 "data_size": 65536 00:19:03.579 }, 00:19:03.579 { 00:19:03.579 "name": null, 00:19:03.579 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:03.579 "is_configured": false, 00:19:03.579 "data_offset": 0, 00:19:03.579 "data_size": 65536 00:19:03.579 }, 00:19:03.579 { 00:19:03.579 "name": "BaseBdev3", 00:19:03.579 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:03.579 "is_configured": true, 00:19:03.579 "data_offset": 0, 00:19:03.579 "data_size": 65536 00:19:03.579 } 00:19:03.579 ] 00:19:03.579 }' 00:19:03.579 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.579 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.158 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.158 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:04.725 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:04.725 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:04.725 [2024-07-12 08:45:39.865955] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.982 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.240 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.240 "name": "Existed_Raid", 00:19:05.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.240 "strip_size_kb": 64, 00:19:05.240 "state": "configuring", 00:19:05.240 "raid_level": "raid0", 00:19:05.240 "superblock": false, 00:19:05.240 "num_base_bdevs": 3, 00:19:05.240 "num_base_bdevs_discovered": 1, 00:19:05.240 "num_base_bdevs_operational": 3, 00:19:05.240 "base_bdevs_list": [ 00:19:05.240 { 00:19:05.240 "name": null, 00:19:05.240 "uuid": 
"eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:05.240 "is_configured": false, 00:19:05.240 "data_offset": 0, 00:19:05.240 "data_size": 65536 00:19:05.240 }, 00:19:05.240 { 00:19:05.240 "name": null, 00:19:05.240 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:05.240 "is_configured": false, 00:19:05.240 "data_offset": 0, 00:19:05.240 "data_size": 65536 00:19:05.240 }, 00:19:05.240 { 00:19:05.240 "name": "BaseBdev3", 00:19:05.240 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:05.240 "is_configured": true, 00:19:05.240 "data_offset": 0, 00:19:05.240 "data_size": 65536 00:19:05.240 } 00:19:05.240 ] 00:19:05.240 }' 00:19:05.240 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.240 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.806 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.806 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:06.063 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:06.063 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:06.321 [2024-07-12 08:45:41.451373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.578 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.578 "name": "Existed_Raid", 00:19:06.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.578 "strip_size_kb": 64, 00:19:06.578 "state": "configuring", 00:19:06.578 "raid_level": "raid0", 00:19:06.578 "superblock": false, 00:19:06.578 "num_base_bdevs": 3, 00:19:06.578 "num_base_bdevs_discovered": 2, 00:19:06.578 "num_base_bdevs_operational": 3, 00:19:06.578 
"base_bdevs_list": [ 00:19:06.578 { 00:19:06.578 "name": null, 00:19:06.578 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:06.578 "is_configured": false, 00:19:06.578 "data_offset": 0, 00:19:06.578 "data_size": 65536 00:19:06.578 }, 00:19:06.578 { 00:19:06.578 "name": "BaseBdev2", 00:19:06.578 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:06.578 "is_configured": true, 00:19:06.578 "data_offset": 0, 00:19:06.578 "data_size": 65536 00:19:06.578 }, 00:19:06.578 { 00:19:06.578 "name": "BaseBdev3", 00:19:06.578 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:06.578 "is_configured": true, 00:19:06.578 "data_offset": 0, 00:19:06.578 "data_size": 65536 00:19:06.578 } 00:19:06.578 ] 00:19:06.578 }' 00:19:06.578 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.578 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.512 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.512 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:07.512 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:07.512 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.512 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:07.770 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u eb4b1950-a088-40a0-8fb0-1720bb28a5d8 00:19:08.050 [2024-07-12 08:45:43.166909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:08.050 [2024-07-12 08:45:43.167212] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:08.050 [2024-07-12 08:45:43.167255] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:08.050 [2024-07-12 08:45:43.167490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:08.050 [2024-07-12 08:45:43.167939] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:08.050 [2024-07-12 08:45:43.168060] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:08.050 [2024-07-12 08:45:43.168430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.050 NewBaseBdev 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:08.050 08:45:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.322 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:08.579 [ 00:19:08.579 { 00:19:08.579 "name": "NewBaseBdev", 00:19:08.579 "aliases": [ 00:19:08.579 "eb4b1950-a088-40a0-8fb0-1720bb28a5d8" 00:19:08.579 ], 00:19:08.579 "product_name": "Malloc disk", 00:19:08.579 "block_size": 512, 00:19:08.579 "num_blocks": 65536, 00:19:08.579 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:08.579 "assigned_rate_limits": { 00:19:08.579 "rw_ios_per_sec": 0, 00:19:08.579 "rw_mbytes_per_sec": 0, 00:19:08.579 "r_mbytes_per_sec": 0, 00:19:08.579 "w_mbytes_per_sec": 0 00:19:08.579 }, 00:19:08.579 "claimed": true, 00:19:08.579 "claim_type": "exclusive_write", 00:19:08.579 "zoned": false, 00:19:08.579 "supported_io_types": { 00:19:08.579 "read": true, 00:19:08.579 "write": true, 00:19:08.579 "unmap": true, 00:19:08.579 "flush": true, 00:19:08.579 "reset": true, 00:19:08.579 "nvme_admin": false, 00:19:08.579 "nvme_io": false, 00:19:08.579 "nvme_io_md": false, 00:19:08.579 "write_zeroes": true, 00:19:08.579 "zcopy": true, 00:19:08.579 "get_zone_info": false, 00:19:08.579 "zone_management": false, 00:19:08.579 "zone_append": false, 00:19:08.579 "compare": false, 00:19:08.579 "compare_and_write": false, 00:19:08.579 "abort": true, 00:19:08.579 "seek_hole": false, 00:19:08.579 "seek_data": false, 00:19:08.579 "copy": true, 00:19:08.579 "nvme_iov_md": false 00:19:08.579 }, 00:19:08.579 "memory_domains": [ 00:19:08.579 { 00:19:08.579 "dma_device_id": "system", 00:19:08.579 "dma_device_type": 1 00:19:08.579 }, 00:19:08.579 { 00:19:08.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.579 "dma_device_type": 2 00:19:08.579 } 00:19:08.579 ], 00:19:08.579 "driver_specific": {} 00:19:08.579 } 00:19:08.579 ] 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:08.579 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.580 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:08.837 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.837 "name": "Existed_Raid", 00:19:08.837 "uuid": "bdd673f8-e572-498f-8163-4b7548f85546", 00:19:08.837 "strip_size_kb": 64, 00:19:08.837 "state": "online", 00:19:08.837 "raid_level": "raid0", 00:19:08.837 "superblock": false, 00:19:08.837 "num_base_bdevs": 3, 00:19:08.837 "num_base_bdevs_discovered": 3, 00:19:08.837 "num_base_bdevs_operational": 3, 00:19:08.837 "base_bdevs_list": [ 00:19:08.837 { 00:19:08.837 "name": "NewBaseBdev", 00:19:08.837 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:08.837 "is_configured": true, 00:19:08.837 "data_offset": 0, 00:19:08.837 "data_size": 65536 00:19:08.837 }, 00:19:08.837 { 00:19:08.837 "name": "BaseBdev2", 00:19:08.837 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:08.837 "is_configured": true, 00:19:08.837 "data_offset": 0, 00:19:08.837 "data_size": 65536 00:19:08.837 }, 00:19:08.837 { 00:19:08.837 "name": "BaseBdev3", 00:19:08.837 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:08.837 "is_configured": true, 00:19:08.837 "data_offset": 0, 00:19:08.837 "data_size": 65536 00:19:08.837 } 00:19:08.837 ] 00:19:08.837 }' 00:19:08.837 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.837 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:09.771 [2024-07-12 08:45:44.903802] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.771 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:09.771 "name": "Existed_Raid", 00:19:09.771 "aliases": [ 00:19:09.771 "bdd673f8-e572-498f-8163-4b7548f85546" 00:19:09.771 ], 00:19:09.771 "product_name": "Raid Volume", 00:19:09.771 "block_size": 512, 00:19:09.771 "num_blocks": 196608, 00:19:09.771 "uuid": "bdd673f8-e572-498f-8163-4b7548f85546", 00:19:09.771 "assigned_rate_limits": { 00:19:09.771 "rw_ios_per_sec": 0, 00:19:09.771 "rw_mbytes_per_sec": 0, 00:19:09.771 "r_mbytes_per_sec": 0, 00:19:09.771 "w_mbytes_per_sec": 0 00:19:09.771 }, 00:19:09.771 "claimed": false, 00:19:09.771 "zoned": false, 00:19:09.771 "supported_io_types": { 00:19:09.771 "read": true, 00:19:09.771 "write": true, 00:19:09.771 "unmap": true, 00:19:09.771 "flush": true, 00:19:09.771 "reset": true, 00:19:09.771 "nvme_admin": false, 00:19:09.771 "nvme_io": false, 00:19:09.771 "nvme_io_md": false, 00:19:09.771 "write_zeroes": true, 00:19:09.771 "zcopy": false, 00:19:09.771 "get_zone_info": false, 
00:19:09.771 "zone_management": false, 00:19:09.771 "zone_append": false, 00:19:09.771 "compare": false, 00:19:09.771 "compare_and_write": false, 00:19:09.771 "abort": false, 00:19:09.771 "seek_hole": false, 00:19:09.771 "seek_data": false, 00:19:09.771 "copy": false, 00:19:09.771 "nvme_iov_md": false 00:19:09.771 }, 00:19:09.771 "memory_domains": [ 00:19:09.771 { 00:19:09.771 "dma_device_id": "system", 00:19:09.771 "dma_device_type": 1 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.771 "dma_device_type": 2 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "dma_device_id": "system", 00:19:09.771 "dma_device_type": 1 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.771 "dma_device_type": 2 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "dma_device_id": "system", 00:19:09.771 "dma_device_type": 1 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.771 "dma_device_type": 2 00:19:09.771 } 00:19:09.771 ], 00:19:09.771 "driver_specific": { 00:19:09.771 "raid": { 00:19:09.771 "uuid": "bdd673f8-e572-498f-8163-4b7548f85546", 00:19:09.771 "strip_size_kb": 64, 00:19:09.771 "state": "online", 00:19:09.771 "raid_level": "raid0", 00:19:09.771 "superblock": false, 00:19:09.771 "num_base_bdevs": 3, 00:19:09.771 "num_base_bdevs_discovered": 3, 00:19:09.771 "num_base_bdevs_operational": 3, 00:19:09.771 "base_bdevs_list": [ 00:19:09.771 { 00:19:09.771 "name": "NewBaseBdev", 00:19:09.771 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:09.771 "is_configured": true, 00:19:09.771 "data_offset": 0, 00:19:09.771 "data_size": 65536 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "name": "BaseBdev2", 00:19:09.771 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:09.771 "is_configured": true, 00:19:09.771 "data_offset": 0, 00:19:09.771 "data_size": 65536 00:19:09.771 }, 00:19:09.771 { 00:19:09.771 "name": "BaseBdev3", 00:19:09.771 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:09.771 "is_configured": true, 00:19:09.771 "data_offset": 0, 00:19:09.771 "data_size": 65536 00:19:09.771 } 00:19:09.771 ] 00:19:09.771 } 00:19:09.771 } 00:19:09.772 }' 00:19:09.772 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.030 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:10.030 BaseBdev2 00:19:10.030 BaseBdev3' 00:19:10.030 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.030 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:10.030 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.288 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.288 "name": "NewBaseBdev", 00:19:10.288 "aliases": [ 00:19:10.288 "eb4b1950-a088-40a0-8fb0-1720bb28a5d8" 00:19:10.288 ], 00:19:10.288 "product_name": "Malloc disk", 00:19:10.288 "block_size": 512, 00:19:10.288 "num_blocks": 65536, 00:19:10.288 "uuid": "eb4b1950-a088-40a0-8fb0-1720bb28a5d8", 00:19:10.288 "assigned_rate_limits": { 00:19:10.288 "rw_ios_per_sec": 0, 00:19:10.288 "rw_mbytes_per_sec": 0, 00:19:10.288 "r_mbytes_per_sec": 0, 00:19:10.288 "w_mbytes_per_sec": 0 00:19:10.288 }, 00:19:10.288 "claimed": 
true, 00:19:10.288 "claim_type": "exclusive_write", 00:19:10.288 "zoned": false, 00:19:10.288 "supported_io_types": { 00:19:10.288 "read": true, 00:19:10.288 "write": true, 00:19:10.288 "unmap": true, 00:19:10.288 "flush": true, 00:19:10.288 "reset": true, 00:19:10.288 "nvme_admin": false, 00:19:10.288 "nvme_io": false, 00:19:10.288 "nvme_io_md": false, 00:19:10.288 "write_zeroes": true, 00:19:10.288 "zcopy": true, 00:19:10.288 "get_zone_info": false, 00:19:10.288 "zone_management": false, 00:19:10.288 "zone_append": false, 00:19:10.288 "compare": false, 00:19:10.288 "compare_and_write": false, 00:19:10.288 "abort": true, 00:19:10.288 "seek_hole": false, 00:19:10.288 "seek_data": false, 00:19:10.288 "copy": true, 00:19:10.288 "nvme_iov_md": false 00:19:10.288 }, 00:19:10.288 "memory_domains": [ 00:19:10.288 { 00:19:10.288 "dma_device_id": "system", 00:19:10.288 "dma_device_type": 1 00:19:10.288 }, 00:19:10.288 { 00:19:10.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.288 "dma_device_type": 2 00:19:10.288 } 00:19:10.288 ], 00:19:10.288 "driver_specific": {} 00:19:10.288 }' 00:19:10.288 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.288 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.289 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.289 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.547 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.805 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.805 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.805 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:10.805 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.063 "name": "BaseBdev2", 00:19:11.063 "aliases": [ 00:19:11.063 "141d944b-3b5e-4578-b9a1-7718049d8889" 00:19:11.063 ], 00:19:11.063 "product_name": "Malloc disk", 00:19:11.063 "block_size": 512, 00:19:11.063 "num_blocks": 65536, 00:19:11.063 "uuid": "141d944b-3b5e-4578-b9a1-7718049d8889", 00:19:11.063 "assigned_rate_limits": { 00:19:11.063 "rw_ios_per_sec": 0, 00:19:11.063 "rw_mbytes_per_sec": 0, 00:19:11.063 "r_mbytes_per_sec": 0, 00:19:11.063 "w_mbytes_per_sec": 0 00:19:11.063 }, 00:19:11.063 "claimed": true, 00:19:11.063 "claim_type": "exclusive_write", 00:19:11.063 "zoned": false, 00:19:11.063 "supported_io_types": { 00:19:11.063 "read": true, 00:19:11.063 "write": true, 00:19:11.063 "unmap": true, 
00:19:11.063 "flush": true, 00:19:11.063 "reset": true, 00:19:11.063 "nvme_admin": false, 00:19:11.063 "nvme_io": false, 00:19:11.063 "nvme_io_md": false, 00:19:11.063 "write_zeroes": true, 00:19:11.063 "zcopy": true, 00:19:11.063 "get_zone_info": false, 00:19:11.063 "zone_management": false, 00:19:11.063 "zone_append": false, 00:19:11.063 "compare": false, 00:19:11.063 "compare_and_write": false, 00:19:11.063 "abort": true, 00:19:11.063 "seek_hole": false, 00:19:11.063 "seek_data": false, 00:19:11.063 "copy": true, 00:19:11.063 "nvme_iov_md": false 00:19:11.063 }, 00:19:11.063 "memory_domains": [ 00:19:11.063 { 00:19:11.063 "dma_device_id": "system", 00:19:11.063 "dma_device_type": 1 00:19:11.063 }, 00:19:11.063 { 00:19:11.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.063 "dma_device_type": 2 00:19:11.063 } 00:19:11.063 ], 00:19:11.063 "driver_specific": {} 00:19:11.063 }' 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.063 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:11.321 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.579 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.579 "name": "BaseBdev3", 00:19:11.579 "aliases": [ 00:19:11.579 "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2" 00:19:11.579 ], 00:19:11.579 "product_name": "Malloc disk", 00:19:11.579 "block_size": 512, 00:19:11.579 "num_blocks": 65536, 00:19:11.579 "uuid": "e962fdde-b8ab-4a6c-b6b6-dc5945cc3ca2", 00:19:11.579 "assigned_rate_limits": { 00:19:11.579 "rw_ios_per_sec": 0, 00:19:11.579 "rw_mbytes_per_sec": 0, 00:19:11.579 "r_mbytes_per_sec": 0, 00:19:11.579 "w_mbytes_per_sec": 0 00:19:11.579 }, 00:19:11.579 "claimed": true, 00:19:11.579 "claim_type": "exclusive_write", 00:19:11.579 "zoned": false, 00:19:11.579 "supported_io_types": { 00:19:11.579 "read": true, 00:19:11.579 "write": true, 00:19:11.579 "unmap": true, 00:19:11.579 "flush": true, 00:19:11.579 "reset": true, 00:19:11.579 "nvme_admin": false, 00:19:11.579 "nvme_io": false, 00:19:11.579 "nvme_io_md": false, 00:19:11.579 "write_zeroes": true, 
00:19:11.579 "zcopy": true, 00:19:11.579 "get_zone_info": false, 00:19:11.579 "zone_management": false, 00:19:11.579 "zone_append": false, 00:19:11.579 "compare": false, 00:19:11.579 "compare_and_write": false, 00:19:11.579 "abort": true, 00:19:11.579 "seek_hole": false, 00:19:11.579 "seek_data": false, 00:19:11.579 "copy": true, 00:19:11.579 "nvme_iov_md": false 00:19:11.579 }, 00:19:11.579 "memory_domains": [ 00:19:11.579 { 00:19:11.579 "dma_device_id": "system", 00:19:11.579 "dma_device_type": 1 00:19:11.579 }, 00:19:11.579 { 00:19:11.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.579 "dma_device_type": 2 00:19:11.579 } 00:19:11.579 ], 00:19:11.579 "driver_specific": {} 00:19:11.579 }' 00:19:11.579 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.837 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:12.095 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.354 [2024-07-12 08:45:47.504118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.354 [2024-07-12 08:45:47.504380] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.354 [2024-07-12 08:45:47.504586] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.354 [2024-07-12 08:45:47.504691] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.354 [2024-07-12 08:45:47.504883] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 125969 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 125969 ']' 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 125969 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125969 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125969' 00:19:12.354 killing process with pid 125969 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 125969 00:19:12.354 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 125969 00:19:12.354 [2024-07-12 08:45:47.545152] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.613 [2024-07-12 08:45:47.799939] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.983 ************************************ 00:19:13.983 END TEST raid_state_function_test 00:19:13.983 ************************************ 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:13.983 00:19:13.983 real 0m34.486s 00:19:13.983 user 1m4.942s 00:19:13.983 sys 0m3.565s 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.983 08:45:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:13.983 08:45:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:19:13.983 08:45:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:13.983 08:45:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.983 08:45:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.983 ************************************ 00:19:13.983 START TEST raid_state_function_test_sb 00:19:13.983 ************************************ 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:13.983 08:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=127056 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:13.983 Process raid pid: 127056 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127056' 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 127056 /var/tmp/spdk-raid.sock 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 127056 ']' 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:13.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.983 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.983 [2024-07-12 08:45:49.116091] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
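At this point the superblock variant has launched a fresh bdev_svc instance (raid pid 127056) and drives it over the /var/tmp/spdk-raid.sock RPC socket. A minimal sketch of the create-and-verify flow that follows, assuming a built SPDK tree at the paths shown in this trace and using only RPCs that actually appear in it:

    #!/usr/bin/env bash
    # Hedged sketch of the raid0-with-superblock flow traced below; the order mirrors the test:
    # the raid volume is created first (it stays "configuring"), then the base bdevs are added.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 64 KiB strip size (-z 64), on-disk superblock (-s), raid0 across three named base bdevs.
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # 32 MiB malloc disks with 512-byte blocks (65536 blocks, as in the bdev dumps above).
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done
    # Once all members exist the array should report "online" with 3 of 3 base bdevs discovered.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The same bdev_raid_get_bdevs-plus-jq pairing is what verify_raid_bdev_state uses throughout this log to compare state, raid_level, strip_size_kb and the base_bdevs_list entries against the expected values.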
00:19:13.983 [2024-07-12 08:45:49.116862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.239 [2024-07-12 08:45:49.308405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.496 [2024-07-12 08:45:49.571354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.754 [2024-07-12 08:45:49.792794] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.010 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.010 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:15.010 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:15.268 [2024-07-12 08:45:50.394850] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.268 [2024-07-12 08:45:50.395211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.268 [2024-07-12 08:45:50.395353] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.268 [2024-07-12 08:45:50.395435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.268 [2024-07-12 08:45:50.395563] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.268 [2024-07-12 08:45:50.395624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.268 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.526 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.526 "name": "Existed_Raid", 00:19:15.526 "uuid": 
"62ada833-7978-4e7b-9427-163c5d6dc9b9", 00:19:15.526 "strip_size_kb": 64, 00:19:15.526 "state": "configuring", 00:19:15.526 "raid_level": "raid0", 00:19:15.526 "superblock": true, 00:19:15.526 "num_base_bdevs": 3, 00:19:15.526 "num_base_bdevs_discovered": 0, 00:19:15.526 "num_base_bdevs_operational": 3, 00:19:15.526 "base_bdevs_list": [ 00:19:15.526 { 00:19:15.526 "name": "BaseBdev1", 00:19:15.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.526 "is_configured": false, 00:19:15.526 "data_offset": 0, 00:19:15.526 "data_size": 0 00:19:15.526 }, 00:19:15.526 { 00:19:15.526 "name": "BaseBdev2", 00:19:15.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.526 "is_configured": false, 00:19:15.526 "data_offset": 0, 00:19:15.526 "data_size": 0 00:19:15.526 }, 00:19:15.526 { 00:19:15.526 "name": "BaseBdev3", 00:19:15.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.526 "is_configured": false, 00:19:15.526 "data_offset": 0, 00:19:15.526 "data_size": 0 00:19:15.526 } 00:19:15.526 ] 00:19:15.526 }' 00:19:15.526 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.526 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.505 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:16.763 [2024-07-12 08:45:51.814955] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.763 [2024-07-12 08:45:51.815266] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:16.763 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:17.021 [2024-07-12 08:45:52.139081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:17.021 [2024-07-12 08:45:52.139352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:17.021 [2024-07-12 08:45:52.139469] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.021 [2024-07-12 08:45:52.139529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.021 [2024-07-12 08:45:52.139653] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:17.021 [2024-07-12 08:45:52.139814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.021 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.279 [2024-07-12 08:45:52.460766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.279 BaseBdev1 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:17.537 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.793 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:18.051 [ 00:19:18.051 { 00:19:18.051 "name": "BaseBdev1", 00:19:18.051 "aliases": [ 00:19:18.051 "7837be70-a38a-45c2-b8db-5728b2f0b636" 00:19:18.051 ], 00:19:18.051 "product_name": "Malloc disk", 00:19:18.051 "block_size": 512, 00:19:18.051 "num_blocks": 65536, 00:19:18.051 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:18.051 "assigned_rate_limits": { 00:19:18.051 "rw_ios_per_sec": 0, 00:19:18.051 "rw_mbytes_per_sec": 0, 00:19:18.051 "r_mbytes_per_sec": 0, 00:19:18.051 "w_mbytes_per_sec": 0 00:19:18.051 }, 00:19:18.051 "claimed": true, 00:19:18.051 "claim_type": "exclusive_write", 00:19:18.051 "zoned": false, 00:19:18.051 "supported_io_types": { 00:19:18.051 "read": true, 00:19:18.051 "write": true, 00:19:18.051 "unmap": true, 00:19:18.051 "flush": true, 00:19:18.051 "reset": true, 00:19:18.051 "nvme_admin": false, 00:19:18.051 "nvme_io": false, 00:19:18.051 "nvme_io_md": false, 00:19:18.051 "write_zeroes": true, 00:19:18.051 "zcopy": true, 00:19:18.051 "get_zone_info": false, 00:19:18.051 "zone_management": false, 00:19:18.051 "zone_append": false, 00:19:18.051 "compare": false, 00:19:18.051 "compare_and_write": false, 00:19:18.051 "abort": true, 00:19:18.051 "seek_hole": false, 00:19:18.051 "seek_data": false, 00:19:18.051 "copy": true, 00:19:18.051 "nvme_iov_md": false 00:19:18.051 }, 00:19:18.051 "memory_domains": [ 00:19:18.051 { 00:19:18.051 "dma_device_id": "system", 00:19:18.051 "dma_device_type": 1 00:19:18.051 }, 00:19:18.051 { 00:19:18.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.051 "dma_device_type": 2 00:19:18.051 } 00:19:18.051 ], 00:19:18.051 "driver_specific": {} 00:19:18.051 } 00:19:18.051 ] 00:19:18.051 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:18.051 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:18.051 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.052 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.309 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.309 "name": "Existed_Raid", 00:19:18.309 "uuid": "f2dee0bc-83c2-46e5-af5d-a68dd25f810b", 00:19:18.309 "strip_size_kb": 64, 00:19:18.309 "state": "configuring", 00:19:18.309 "raid_level": "raid0", 00:19:18.309 "superblock": true, 00:19:18.309 "num_base_bdevs": 3, 00:19:18.309 "num_base_bdevs_discovered": 1, 00:19:18.309 "num_base_bdevs_operational": 3, 00:19:18.309 "base_bdevs_list": [ 00:19:18.309 { 00:19:18.309 "name": "BaseBdev1", 00:19:18.309 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:18.309 "is_configured": true, 00:19:18.309 "data_offset": 2048, 00:19:18.309 "data_size": 63488 00:19:18.309 }, 00:19:18.309 { 00:19:18.309 "name": "BaseBdev2", 00:19:18.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.309 "is_configured": false, 00:19:18.309 "data_offset": 0, 00:19:18.309 "data_size": 0 00:19:18.309 }, 00:19:18.309 { 00:19:18.309 "name": "BaseBdev3", 00:19:18.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.309 "is_configured": false, 00:19:18.309 "data_offset": 0, 00:19:18.309 "data_size": 0 00:19:18.309 } 00:19:18.309 ] 00:19:18.309 }' 00:19:18.309 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.309 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.242 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:19.242 [2024-07-12 08:45:54.373344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:19.242 [2024-07-12 08:45:54.373584] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:19:19.242 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:19.501 [2024-07-12 08:45:54.645464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.501 [2024-07-12 08:45:54.647994] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.501 [2024-07-12 08:45:54.648247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.501 [2024-07-12 08:45:54.648419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:19.501 [2024-07-12 08:45:54.648508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:19.501 08:45:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.501 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.758 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.758 "name": "Existed_Raid", 00:19:19.758 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:19.758 "strip_size_kb": 64, 00:19:19.758 "state": "configuring", 00:19:19.758 "raid_level": "raid0", 00:19:19.758 "superblock": true, 00:19:19.758 "num_base_bdevs": 3, 00:19:19.758 "num_base_bdevs_discovered": 1, 00:19:19.758 "num_base_bdevs_operational": 3, 00:19:19.758 "base_bdevs_list": [ 00:19:19.758 { 00:19:19.758 "name": "BaseBdev1", 00:19:19.758 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:19.758 "is_configured": true, 00:19:19.758 "data_offset": 2048, 00:19:19.758 "data_size": 63488 00:19:19.758 }, 00:19:19.758 { 00:19:19.758 "name": "BaseBdev2", 00:19:19.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.758 "is_configured": false, 00:19:19.758 "data_offset": 0, 00:19:19.758 "data_size": 0 00:19:19.758 }, 00:19:19.758 { 00:19:19.758 "name": "BaseBdev3", 00:19:19.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.758 "is_configured": false, 00:19:19.758 "data_offset": 0, 00:19:19.758 "data_size": 0 00:19:19.758 } 00:19:19.758 ] 00:19:19.758 }' 00:19:19.758 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.758 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.690 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:20.948 [2024-07-12 08:45:56.006927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.948 BaseBdev2 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:20.948 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:21.206 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:21.481 [ 00:19:21.481 { 00:19:21.481 "name": "BaseBdev2", 00:19:21.482 "aliases": [ 00:19:21.482 "b7211b67-6fd2-4164-8a27-f10510fb853c" 00:19:21.482 ], 00:19:21.482 "product_name": "Malloc disk", 00:19:21.482 "block_size": 512, 00:19:21.482 "num_blocks": 65536, 00:19:21.482 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:21.482 "assigned_rate_limits": { 00:19:21.482 "rw_ios_per_sec": 0, 00:19:21.482 "rw_mbytes_per_sec": 0, 00:19:21.482 "r_mbytes_per_sec": 0, 00:19:21.482 "w_mbytes_per_sec": 0 00:19:21.482 }, 00:19:21.482 "claimed": true, 00:19:21.482 "claim_type": "exclusive_write", 00:19:21.482 "zoned": false, 00:19:21.482 "supported_io_types": { 00:19:21.482 "read": true, 00:19:21.482 "write": true, 00:19:21.482 "unmap": true, 00:19:21.482 "flush": true, 00:19:21.482 "reset": true, 00:19:21.482 "nvme_admin": false, 00:19:21.482 "nvme_io": false, 00:19:21.482 "nvme_io_md": false, 00:19:21.482 "write_zeroes": true, 00:19:21.482 "zcopy": true, 00:19:21.482 "get_zone_info": false, 00:19:21.482 "zone_management": false, 00:19:21.482 "zone_append": false, 00:19:21.482 "compare": false, 00:19:21.482 "compare_and_write": false, 00:19:21.482 "abort": true, 00:19:21.482 "seek_hole": false, 00:19:21.482 "seek_data": false, 00:19:21.482 "copy": true, 00:19:21.482 "nvme_iov_md": false 00:19:21.482 }, 00:19:21.482 "memory_domains": [ 00:19:21.482 { 00:19:21.482 "dma_device_id": "system", 00:19:21.482 "dma_device_type": 1 00:19:21.482 }, 00:19:21.482 { 00:19:21.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.482 "dma_device_type": 2 00:19:21.482 } 00:19:21.482 ], 00:19:21.482 "driver_specific": {} 00:19:21.482 } 00:19:21.482 ] 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.482 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.048 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.048 "name": "Existed_Raid", 00:19:22.048 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:22.048 "strip_size_kb": 64, 00:19:22.048 "state": "configuring", 00:19:22.048 "raid_level": "raid0", 00:19:22.048 "superblock": true, 00:19:22.048 "num_base_bdevs": 3, 00:19:22.048 "num_base_bdevs_discovered": 2, 00:19:22.048 "num_base_bdevs_operational": 3, 00:19:22.048 "base_bdevs_list": [ 00:19:22.048 { 00:19:22.048 "name": "BaseBdev1", 00:19:22.048 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 2048, 00:19:22.048 "data_size": 63488 00:19:22.048 }, 00:19:22.048 { 00:19:22.048 "name": "BaseBdev2", 00:19:22.048 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 2048, 00:19:22.048 "data_size": 63488 00:19:22.048 }, 00:19:22.048 { 00:19:22.048 "name": "BaseBdev3", 00:19:22.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.048 "is_configured": false, 00:19:22.048 "data_offset": 0, 00:19:22.048 "data_size": 0 00:19:22.048 } 00:19:22.048 ] 00:19:22.048 }' 00:19:22.048 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.048 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.614 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:22.872 [2024-07-12 08:45:58.026065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:22.872 [2024-07-12 08:45:58.026597] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:22.872 [2024-07-12 08:45:58.026726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:22.872 [2024-07-12 08:45:58.026911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:22.872 [2024-07-12 08:45:58.027335] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:22.872 [2024-07-12 08:45:58.027466] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:22.872 BaseBdev3 00:19:22.872 [2024-07-12 08:45:58.027763] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.872 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.438 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:23.696 [ 00:19:23.696 { 00:19:23.696 "name": "BaseBdev3", 00:19:23.696 "aliases": [ 00:19:23.696 "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1" 00:19:23.696 ], 00:19:23.696 "product_name": "Malloc disk", 00:19:23.696 "block_size": 512, 00:19:23.696 "num_blocks": 65536, 00:19:23.696 "uuid": "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1", 00:19:23.696 "assigned_rate_limits": { 00:19:23.696 "rw_ios_per_sec": 0, 00:19:23.696 "rw_mbytes_per_sec": 0, 00:19:23.696 "r_mbytes_per_sec": 0, 00:19:23.696 "w_mbytes_per_sec": 0 00:19:23.696 }, 00:19:23.696 "claimed": true, 00:19:23.696 "claim_type": "exclusive_write", 00:19:23.696 "zoned": false, 00:19:23.696 "supported_io_types": { 00:19:23.696 "read": true, 00:19:23.696 "write": true, 00:19:23.696 "unmap": true, 00:19:23.696 "flush": true, 00:19:23.696 "reset": true, 00:19:23.696 "nvme_admin": false, 00:19:23.696 "nvme_io": false, 00:19:23.696 "nvme_io_md": false, 00:19:23.696 "write_zeroes": true, 00:19:23.696 "zcopy": true, 00:19:23.696 "get_zone_info": false, 00:19:23.696 "zone_management": false, 00:19:23.696 "zone_append": false, 00:19:23.696 "compare": false, 00:19:23.696 "compare_and_write": false, 00:19:23.696 "abort": true, 00:19:23.696 "seek_hole": false, 00:19:23.696 "seek_data": false, 00:19:23.696 "copy": true, 00:19:23.696 "nvme_iov_md": false 00:19:23.696 }, 00:19:23.696 "memory_domains": [ 00:19:23.696 { 00:19:23.696 "dma_device_id": "system", 00:19:23.696 "dma_device_type": 1 00:19:23.696 }, 00:19:23.696 { 00:19:23.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.696 "dma_device_type": 2 00:19:23.696 } 00:19:23.696 ], 00:19:23.696 "driver_specific": {} 00:19:23.696 } 00:19:23.696 ] 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.696 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.954 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.954 "name": "Existed_Raid", 00:19:23.954 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:23.954 "strip_size_kb": 64, 00:19:23.954 "state": "online", 00:19:23.954 "raid_level": "raid0", 00:19:23.954 "superblock": true, 00:19:23.954 "num_base_bdevs": 3, 00:19:23.954 "num_base_bdevs_discovered": 3, 00:19:23.954 "num_base_bdevs_operational": 3, 00:19:23.954 "base_bdevs_list": [ 00:19:23.954 { 00:19:23.954 "name": "BaseBdev1", 00:19:23.954 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:23.954 "is_configured": true, 00:19:23.954 "data_offset": 2048, 00:19:23.954 "data_size": 63488 00:19:23.954 }, 00:19:23.954 { 00:19:23.954 "name": "BaseBdev2", 00:19:23.954 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:23.954 "is_configured": true, 00:19:23.954 "data_offset": 2048, 00:19:23.954 "data_size": 63488 00:19:23.954 }, 00:19:23.954 { 00:19:23.954 "name": "BaseBdev3", 00:19:23.954 "uuid": "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1", 00:19:23.954 "is_configured": true, 00:19:23.954 "data_offset": 2048, 00:19:23.954 "data_size": 63488 00:19:23.954 } 00:19:23.954 ] 00:19:23.954 }' 00:19:23.954 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.954 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:24.885 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:24.886 [2024-07-12 08:45:59.998944] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.886 "name": "Existed_Raid", 00:19:24.886 "aliases": [ 00:19:24.886 "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a" 00:19:24.886 ], 00:19:24.886 "product_name": "Raid Volume", 00:19:24.886 "block_size": 512, 00:19:24.886 "num_blocks": 190464, 00:19:24.886 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:24.886 
"assigned_rate_limits": { 00:19:24.886 "rw_ios_per_sec": 0, 00:19:24.886 "rw_mbytes_per_sec": 0, 00:19:24.886 "r_mbytes_per_sec": 0, 00:19:24.886 "w_mbytes_per_sec": 0 00:19:24.886 }, 00:19:24.886 "claimed": false, 00:19:24.886 "zoned": false, 00:19:24.886 "supported_io_types": { 00:19:24.886 "read": true, 00:19:24.886 "write": true, 00:19:24.886 "unmap": true, 00:19:24.886 "flush": true, 00:19:24.886 "reset": true, 00:19:24.886 "nvme_admin": false, 00:19:24.886 "nvme_io": false, 00:19:24.886 "nvme_io_md": false, 00:19:24.886 "write_zeroes": true, 00:19:24.886 "zcopy": false, 00:19:24.886 "get_zone_info": false, 00:19:24.886 "zone_management": false, 00:19:24.886 "zone_append": false, 00:19:24.886 "compare": false, 00:19:24.886 "compare_and_write": false, 00:19:24.886 "abort": false, 00:19:24.886 "seek_hole": false, 00:19:24.886 "seek_data": false, 00:19:24.886 "copy": false, 00:19:24.886 "nvme_iov_md": false 00:19:24.886 }, 00:19:24.886 "memory_domains": [ 00:19:24.886 { 00:19:24.886 "dma_device_id": "system", 00:19:24.886 "dma_device_type": 1 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.886 "dma_device_type": 2 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "dma_device_id": "system", 00:19:24.886 "dma_device_type": 1 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.886 "dma_device_type": 2 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "dma_device_id": "system", 00:19:24.886 "dma_device_type": 1 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.886 "dma_device_type": 2 00:19:24.886 } 00:19:24.886 ], 00:19:24.886 "driver_specific": { 00:19:24.886 "raid": { 00:19:24.886 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:24.886 "strip_size_kb": 64, 00:19:24.886 "state": "online", 00:19:24.886 "raid_level": "raid0", 00:19:24.886 "superblock": true, 00:19:24.886 "num_base_bdevs": 3, 00:19:24.886 "num_base_bdevs_discovered": 3, 00:19:24.886 "num_base_bdevs_operational": 3, 00:19:24.886 "base_bdevs_list": [ 00:19:24.886 { 00:19:24.886 "name": "BaseBdev1", 00:19:24.886 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "name": "BaseBdev2", 00:19:24.886 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 }, 00:19:24.886 { 00:19:24.886 "name": "BaseBdev3", 00:19:24.886 "uuid": "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1", 00:19:24.886 "is_configured": true, 00:19:24.886 "data_offset": 2048, 00:19:24.886 "data_size": 63488 00:19:24.886 } 00:19:24.886 ] 00:19:24.886 } 00:19:24.886 } 00:19:24.886 }' 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:24.886 BaseBdev2 00:19:24.886 BaseBdev3' 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:24.886 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:19:25.197 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.197 "name": "BaseBdev1", 00:19:25.197 "aliases": [ 00:19:25.197 "7837be70-a38a-45c2-b8db-5728b2f0b636" 00:19:25.197 ], 00:19:25.197 "product_name": "Malloc disk", 00:19:25.197 "block_size": 512, 00:19:25.197 "num_blocks": 65536, 00:19:25.197 "uuid": "7837be70-a38a-45c2-b8db-5728b2f0b636", 00:19:25.197 "assigned_rate_limits": { 00:19:25.197 "rw_ios_per_sec": 0, 00:19:25.197 "rw_mbytes_per_sec": 0, 00:19:25.197 "r_mbytes_per_sec": 0, 00:19:25.197 "w_mbytes_per_sec": 0 00:19:25.197 }, 00:19:25.197 "claimed": true, 00:19:25.197 "claim_type": "exclusive_write", 00:19:25.197 "zoned": false, 00:19:25.197 "supported_io_types": { 00:19:25.197 "read": true, 00:19:25.197 "write": true, 00:19:25.197 "unmap": true, 00:19:25.197 "flush": true, 00:19:25.197 "reset": true, 00:19:25.197 "nvme_admin": false, 00:19:25.197 "nvme_io": false, 00:19:25.197 "nvme_io_md": false, 00:19:25.197 "write_zeroes": true, 00:19:25.197 "zcopy": true, 00:19:25.197 "get_zone_info": false, 00:19:25.197 "zone_management": false, 00:19:25.197 "zone_append": false, 00:19:25.197 "compare": false, 00:19:25.197 "compare_and_write": false, 00:19:25.197 "abort": true, 00:19:25.197 "seek_hole": false, 00:19:25.197 "seek_data": false, 00:19:25.197 "copy": true, 00:19:25.197 "nvme_iov_md": false 00:19:25.197 }, 00:19:25.197 "memory_domains": [ 00:19:25.197 { 00:19:25.197 "dma_device_id": "system", 00:19:25.197 "dma_device_type": 1 00:19:25.197 }, 00:19:25.197 { 00:19:25.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.197 "dma_device_type": 2 00:19:25.197 } 00:19:25.197 ], 00:19:25.197 "driver_specific": {} 00:19:25.197 }' 00:19:25.197 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.469 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.469 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.469 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.470 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.470 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.470 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.470 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:25.727 08:46:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.985 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.985 "name": "BaseBdev2", 
00:19:25.985 "aliases": [ 00:19:25.985 "b7211b67-6fd2-4164-8a27-f10510fb853c" 00:19:25.985 ], 00:19:25.985 "product_name": "Malloc disk", 00:19:25.985 "block_size": 512, 00:19:25.985 "num_blocks": 65536, 00:19:25.985 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:25.985 "assigned_rate_limits": { 00:19:25.985 "rw_ios_per_sec": 0, 00:19:25.985 "rw_mbytes_per_sec": 0, 00:19:25.985 "r_mbytes_per_sec": 0, 00:19:25.985 "w_mbytes_per_sec": 0 00:19:25.985 }, 00:19:25.985 "claimed": true, 00:19:25.985 "claim_type": "exclusive_write", 00:19:25.985 "zoned": false, 00:19:25.985 "supported_io_types": { 00:19:25.985 "read": true, 00:19:25.985 "write": true, 00:19:25.985 "unmap": true, 00:19:25.985 "flush": true, 00:19:25.985 "reset": true, 00:19:25.985 "nvme_admin": false, 00:19:25.985 "nvme_io": false, 00:19:25.985 "nvme_io_md": false, 00:19:25.985 "write_zeroes": true, 00:19:25.985 "zcopy": true, 00:19:25.985 "get_zone_info": false, 00:19:25.985 "zone_management": false, 00:19:25.985 "zone_append": false, 00:19:25.985 "compare": false, 00:19:25.985 "compare_and_write": false, 00:19:25.985 "abort": true, 00:19:25.985 "seek_hole": false, 00:19:25.985 "seek_data": false, 00:19:25.985 "copy": true, 00:19:25.985 "nvme_iov_md": false 00:19:25.985 }, 00:19:25.985 "memory_domains": [ 00:19:25.985 { 00:19:25.985 "dma_device_id": "system", 00:19:25.985 "dma_device_type": 1 00:19:25.985 }, 00:19:25.985 { 00:19:25.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.985 "dma_device_type": 2 00:19:25.985 } 00:19:25.985 ], 00:19:25.985 "driver_specific": {} 00:19:25.985 }' 00:19:25.985 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.985 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.243 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:26.501 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.759 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.759 "name": "BaseBdev3", 00:19:26.759 "aliases": [ 00:19:26.759 "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1" 00:19:26.759 ], 00:19:26.759 "product_name": "Malloc disk", 00:19:26.759 
"block_size": 512, 00:19:26.759 "num_blocks": 65536, 00:19:26.759 "uuid": "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1", 00:19:26.759 "assigned_rate_limits": { 00:19:26.759 "rw_ios_per_sec": 0, 00:19:26.759 "rw_mbytes_per_sec": 0, 00:19:26.759 "r_mbytes_per_sec": 0, 00:19:26.759 "w_mbytes_per_sec": 0 00:19:26.759 }, 00:19:26.759 "claimed": true, 00:19:26.759 "claim_type": "exclusive_write", 00:19:26.759 "zoned": false, 00:19:26.759 "supported_io_types": { 00:19:26.759 "read": true, 00:19:26.759 "write": true, 00:19:26.759 "unmap": true, 00:19:26.759 "flush": true, 00:19:26.759 "reset": true, 00:19:26.759 "nvme_admin": false, 00:19:26.759 "nvme_io": false, 00:19:26.759 "nvme_io_md": false, 00:19:26.759 "write_zeroes": true, 00:19:26.759 "zcopy": true, 00:19:26.759 "get_zone_info": false, 00:19:26.759 "zone_management": false, 00:19:26.759 "zone_append": false, 00:19:26.759 "compare": false, 00:19:26.759 "compare_and_write": false, 00:19:26.759 "abort": true, 00:19:26.759 "seek_hole": false, 00:19:26.759 "seek_data": false, 00:19:26.759 "copy": true, 00:19:26.759 "nvme_iov_md": false 00:19:26.759 }, 00:19:26.759 "memory_domains": [ 00:19:26.759 { 00:19:26.759 "dma_device_id": "system", 00:19:26.759 "dma_device_type": 1 00:19:26.759 }, 00:19:26.759 { 00:19:26.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.759 "dma_device_type": 2 00:19:26.759 } 00:19:26.759 ], 00:19:26.759 "driver_specific": {} 00:19:26.759 }' 00:19:26.759 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.759 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.759 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.759 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.016 08:46:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.016 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.016 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.017 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.017 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.017 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.274 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.274 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.274 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:27.532 [2024-07-12 08:46:02.563389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.532 [2024-07-12 08:46:02.563633] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.532 [2024-07-12 08:46:02.563811] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.532 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.791 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.791 "name": "Existed_Raid", 00:19:27.791 "uuid": "b15416dc-ef43-4488-9f6d-4c39c9f6ca0a", 00:19:27.791 "strip_size_kb": 64, 00:19:27.791 "state": "offline", 00:19:27.791 "raid_level": "raid0", 00:19:27.791 "superblock": true, 00:19:27.791 "num_base_bdevs": 3, 00:19:27.791 "num_base_bdevs_discovered": 2, 00:19:27.791 "num_base_bdevs_operational": 2, 00:19:27.791 "base_bdevs_list": [ 00:19:27.791 { 00:19:27.791 "name": null, 00:19:27.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.791 "is_configured": false, 00:19:27.791 "data_offset": 2048, 00:19:27.791 "data_size": 63488 00:19:27.791 }, 00:19:27.791 { 00:19:27.791 "name": "BaseBdev2", 00:19:27.791 "uuid": "b7211b67-6fd2-4164-8a27-f10510fb853c", 00:19:27.791 "is_configured": true, 00:19:27.791 "data_offset": 2048, 00:19:27.791 "data_size": 63488 00:19:27.791 }, 00:19:27.791 { 00:19:27.791 "name": "BaseBdev3", 00:19:27.791 "uuid": "676e11fe-3e4b-4106-b8c2-fd2ee63d29f1", 00:19:27.791 "is_configured": true, 00:19:27.791 "data_offset": 2048, 00:19:27.791 "data_size": 63488 00:19:27.791 } 00:19:27.791 ] 00:19:27.791 }' 00:19:27.791 08:46:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.791 08:46:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.723 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:28.723 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:28.723 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:28.723 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:28.981 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:28.981 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:28.981 08:46:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:29.239 [2024-07-12 08:46:04.188620] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.239 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:29.239 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:29.239 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.239 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:29.497 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:29.497 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:29.497 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:29.755 [2024-07-12 08:46:04.864324] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:29.755 [2024-07-12 08:46:04.864563] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:30.013 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:30.013 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:30.013 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.013 08:46:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:30.271 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.529 BaseBdev2 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:30.529 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.786 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:31.044 [ 00:19:31.044 { 00:19:31.044 "name": "BaseBdev2", 00:19:31.044 "aliases": [ 00:19:31.044 "011ccadb-07cb-4e7b-a3d1-edbfa0726e70" 00:19:31.044 ], 00:19:31.044 "product_name": "Malloc disk", 00:19:31.044 "block_size": 512, 00:19:31.044 "num_blocks": 65536, 00:19:31.044 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:31.044 "assigned_rate_limits": { 00:19:31.044 "rw_ios_per_sec": 0, 00:19:31.044 "rw_mbytes_per_sec": 0, 00:19:31.044 "r_mbytes_per_sec": 0, 00:19:31.044 "w_mbytes_per_sec": 0 00:19:31.044 }, 00:19:31.044 "claimed": false, 00:19:31.044 "zoned": false, 00:19:31.044 "supported_io_types": { 00:19:31.044 "read": true, 00:19:31.044 "write": true, 00:19:31.044 "unmap": true, 00:19:31.044 "flush": true, 00:19:31.044 "reset": true, 00:19:31.044 "nvme_admin": false, 00:19:31.044 "nvme_io": false, 00:19:31.044 "nvme_io_md": false, 00:19:31.044 "write_zeroes": true, 00:19:31.044 "zcopy": true, 00:19:31.044 "get_zone_info": false, 00:19:31.044 "zone_management": false, 00:19:31.044 "zone_append": false, 00:19:31.044 "compare": false, 00:19:31.044 "compare_and_write": false, 00:19:31.044 "abort": true, 00:19:31.044 "seek_hole": false, 00:19:31.044 "seek_data": false, 00:19:31.044 "copy": true, 00:19:31.044 "nvme_iov_md": false 00:19:31.044 }, 00:19:31.044 "memory_domains": [ 00:19:31.044 { 00:19:31.044 "dma_device_id": "system", 00:19:31.044 "dma_device_type": 1 00:19:31.044 }, 00:19:31.044 { 00:19:31.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.044 "dma_device_type": 2 00:19:31.044 } 00:19:31.044 ], 00:19:31.044 "driver_specific": {} 00:19:31.044 } 00:19:31.044 ] 00:19:31.044 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:31.044 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:31.044 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:31.044 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.302 BaseBdev3 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:31.302 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:31.302 08:46:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.560 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.818 [ 00:19:31.818 { 00:19:31.818 "name": "BaseBdev3", 00:19:31.818 "aliases": [ 00:19:31.818 "26cf3651-d556-4f72-a3bb-5ce33858c3e4" 00:19:31.818 ], 00:19:31.818 "product_name": "Malloc disk", 00:19:31.818 "block_size": 512, 00:19:31.818 "num_blocks": 65536, 00:19:31.818 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:31.818 "assigned_rate_limits": { 00:19:31.818 "rw_ios_per_sec": 0, 00:19:31.818 "rw_mbytes_per_sec": 0, 00:19:31.818 "r_mbytes_per_sec": 0, 00:19:31.818 "w_mbytes_per_sec": 0 00:19:31.818 }, 00:19:31.818 "claimed": false, 00:19:31.818 "zoned": false, 00:19:31.818 "supported_io_types": { 00:19:31.818 "read": true, 00:19:31.818 "write": true, 00:19:31.818 "unmap": true, 00:19:31.818 "flush": true, 00:19:31.818 "reset": true, 00:19:31.818 "nvme_admin": false, 00:19:31.818 "nvme_io": false, 00:19:31.818 "nvme_io_md": false, 00:19:31.818 "write_zeroes": true, 00:19:31.818 "zcopy": true, 00:19:31.818 "get_zone_info": false, 00:19:31.818 "zone_management": false, 00:19:31.818 "zone_append": false, 00:19:31.818 "compare": false, 00:19:31.818 "compare_and_write": false, 00:19:31.818 "abort": true, 00:19:31.818 "seek_hole": false, 00:19:31.818 "seek_data": false, 00:19:31.818 "copy": true, 00:19:31.818 "nvme_iov_md": false 00:19:31.818 }, 00:19:31.818 "memory_domains": [ 00:19:31.818 { 00:19:31.818 "dma_device_id": "system", 00:19:31.818 "dma_device_type": 1 00:19:31.818 }, 00:19:31.818 { 00:19:31.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.818 "dma_device_type": 2 00:19:31.818 } 00:19:31.818 ], 00:19:31.818 "driver_specific": {} 00:19:31.818 } 00:19:31.818 ] 00:19:31.818 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:31.818 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:31.818 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:31.818 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:32.096 [2024-07-12 08:46:07.174996] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:32.096 [2024-07-12 08:46:07.175222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:32.096 [2024-07-12 08:46:07.175515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.096 [2024-07-12 08:46:07.177681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.096 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.353 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.353 "name": "Existed_Raid", 00:19:32.353 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:32.353 "strip_size_kb": 64, 00:19:32.353 "state": "configuring", 00:19:32.353 "raid_level": "raid0", 00:19:32.353 "superblock": true, 00:19:32.353 "num_base_bdevs": 3, 00:19:32.353 "num_base_bdevs_discovered": 2, 00:19:32.353 "num_base_bdevs_operational": 3, 00:19:32.353 "base_bdevs_list": [ 00:19:32.353 { 00:19:32.353 "name": "BaseBdev1", 00:19:32.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.353 "is_configured": false, 00:19:32.353 "data_offset": 0, 00:19:32.353 "data_size": 0 00:19:32.353 }, 00:19:32.353 { 00:19:32.353 "name": "BaseBdev2", 00:19:32.353 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:32.353 "is_configured": true, 00:19:32.353 "data_offset": 2048, 00:19:32.353 "data_size": 63488 00:19:32.353 }, 00:19:32.353 { 00:19:32.353 "name": "BaseBdev3", 00:19:32.353 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:32.353 "is_configured": true, 00:19:32.353 "data_offset": 2048, 00:19:32.353 "data_size": 63488 00:19:32.353 } 00:19:32.353 ] 00:19:32.353 }' 00:19:32.353 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.353 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:33.287 [2024-07-12 08:46:08.459469] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.287 08:46:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.287 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.852 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.852 "name": "Existed_Raid", 00:19:33.852 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:33.852 "strip_size_kb": 64, 00:19:33.852 "state": "configuring", 00:19:33.852 "raid_level": "raid0", 00:19:33.852 "superblock": true, 00:19:33.852 "num_base_bdevs": 3, 00:19:33.852 "num_base_bdevs_discovered": 1, 00:19:33.852 "num_base_bdevs_operational": 3, 00:19:33.852 "base_bdevs_list": [ 00:19:33.852 { 00:19:33.852 "name": "BaseBdev1", 00:19:33.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.852 "is_configured": false, 00:19:33.852 "data_offset": 0, 00:19:33.852 "data_size": 0 00:19:33.852 }, 00:19:33.852 { 00:19:33.852 "name": null, 00:19:33.852 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:33.852 "is_configured": false, 00:19:33.852 "data_offset": 2048, 00:19:33.852 "data_size": 63488 00:19:33.852 }, 00:19:33.852 { 00:19:33.852 "name": "BaseBdev3", 00:19:33.852 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:33.852 "is_configured": true, 00:19:33.852 "data_offset": 2048, 00:19:33.852 "data_size": 63488 00:19:33.852 } 00:19:33.852 ] 00:19:33.852 }' 00:19:33.852 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.852 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.417 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.417 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.674 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:34.674 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.931 [2024-07-12 08:46:10.104771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.931 BaseBdev1 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.931 08:46:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.931 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.189 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:35.479 [ 00:19:35.479 { 00:19:35.479 "name": "BaseBdev1", 00:19:35.479 "aliases": [ 00:19:35.479 "6489752f-5786-423c-9dc1-d2a621f38acd" 00:19:35.479 ], 00:19:35.479 "product_name": "Malloc disk", 00:19:35.479 "block_size": 512, 00:19:35.479 "num_blocks": 65536, 00:19:35.479 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:35.479 "assigned_rate_limits": { 00:19:35.479 "rw_ios_per_sec": 0, 00:19:35.479 "rw_mbytes_per_sec": 0, 00:19:35.479 "r_mbytes_per_sec": 0, 00:19:35.479 "w_mbytes_per_sec": 0 00:19:35.479 }, 00:19:35.479 "claimed": true, 00:19:35.479 "claim_type": "exclusive_write", 00:19:35.479 "zoned": false, 00:19:35.479 "supported_io_types": { 00:19:35.479 "read": true, 00:19:35.479 "write": true, 00:19:35.479 "unmap": true, 00:19:35.479 "flush": true, 00:19:35.479 "reset": true, 00:19:35.479 "nvme_admin": false, 00:19:35.479 "nvme_io": false, 00:19:35.479 "nvme_io_md": false, 00:19:35.479 "write_zeroes": true, 00:19:35.479 "zcopy": true, 00:19:35.479 "get_zone_info": false, 00:19:35.479 "zone_management": false, 00:19:35.479 "zone_append": false, 00:19:35.479 "compare": false, 00:19:35.479 "compare_and_write": false, 00:19:35.479 "abort": true, 00:19:35.479 "seek_hole": false, 00:19:35.479 "seek_data": false, 00:19:35.479 "copy": true, 00:19:35.479 "nvme_iov_md": false 00:19:35.479 }, 00:19:35.479 "memory_domains": [ 00:19:35.479 { 00:19:35.479 "dma_device_id": "system", 00:19:35.479 "dma_device_type": 1 00:19:35.479 }, 00:19:35.479 { 00:19:35.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.479 "dma_device_type": 2 00:19:35.479 } 00:19:35.479 ], 00:19:35.479 "driver_specific": {} 00:19:35.479 } 00:19:35.479 ] 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.737 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.005 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.005 "name": "Existed_Raid", 00:19:36.005 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:36.005 "strip_size_kb": 64, 00:19:36.005 "state": "configuring", 00:19:36.005 "raid_level": "raid0", 00:19:36.005 "superblock": true, 00:19:36.005 "num_base_bdevs": 3, 00:19:36.005 "num_base_bdevs_discovered": 2, 00:19:36.005 "num_base_bdevs_operational": 3, 00:19:36.005 "base_bdevs_list": [ 00:19:36.005 { 00:19:36.005 "name": "BaseBdev1", 00:19:36.005 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:36.005 "is_configured": true, 00:19:36.005 "data_offset": 2048, 00:19:36.005 "data_size": 63488 00:19:36.005 }, 00:19:36.005 { 00:19:36.005 "name": null, 00:19:36.005 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:36.005 "is_configured": false, 00:19:36.005 "data_offset": 2048, 00:19:36.005 "data_size": 63488 00:19:36.005 }, 00:19:36.005 { 00:19:36.005 "name": "BaseBdev3", 00:19:36.005 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:36.005 "is_configured": true, 00:19:36.005 "data_offset": 2048, 00:19:36.005 "data_size": 63488 00:19:36.005 } 00:19:36.005 ] 00:19:36.005 }' 00:19:36.005 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.005 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.573 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.573 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:36.831 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:36.831 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:37.090 [2024-07-12 08:46:12.250135] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.090 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.656 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.656 "name": "Existed_Raid", 00:19:37.656 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:37.656 "strip_size_kb": 64, 00:19:37.656 "state": "configuring", 00:19:37.656 "raid_level": "raid0", 00:19:37.656 "superblock": true, 00:19:37.656 "num_base_bdevs": 3, 00:19:37.656 "num_base_bdevs_discovered": 1, 00:19:37.656 "num_base_bdevs_operational": 3, 00:19:37.656 "base_bdevs_list": [ 00:19:37.656 { 00:19:37.656 "name": "BaseBdev1", 00:19:37.656 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:37.656 "is_configured": true, 00:19:37.656 "data_offset": 2048, 00:19:37.656 "data_size": 63488 00:19:37.656 }, 00:19:37.656 { 00:19:37.656 "name": null, 00:19:37.656 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:37.656 "is_configured": false, 00:19:37.656 "data_offset": 2048, 00:19:37.656 "data_size": 63488 00:19:37.656 }, 00:19:37.656 { 00:19:37.656 "name": null, 00:19:37.656 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:37.656 "is_configured": false, 00:19:37.656 "data_offset": 2048, 00:19:37.656 "data_size": 63488 00:19:37.656 } 00:19:37.656 ] 00:19:37.656 }' 00:19:37.656 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.656 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.221 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.221 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:38.478 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:38.478 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:39.045 [2024-07-12 08:46:13.962614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.045 08:46:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.045 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.304 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.304 "name": "Existed_Raid", 00:19:39.304 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:39.304 "strip_size_kb": 64, 00:19:39.304 "state": "configuring", 00:19:39.304 "raid_level": "raid0", 00:19:39.304 "superblock": true, 00:19:39.304 "num_base_bdevs": 3, 00:19:39.304 "num_base_bdevs_discovered": 2, 00:19:39.304 "num_base_bdevs_operational": 3, 00:19:39.304 "base_bdevs_list": [ 00:19:39.304 { 00:19:39.304 "name": "BaseBdev1", 00:19:39.304 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:39.304 "is_configured": true, 00:19:39.304 "data_offset": 2048, 00:19:39.304 "data_size": 63488 00:19:39.304 }, 00:19:39.304 { 00:19:39.304 "name": null, 00:19:39.304 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:39.304 "is_configured": false, 00:19:39.304 "data_offset": 2048, 00:19:39.304 "data_size": 63488 00:19:39.304 }, 00:19:39.304 { 00:19:39.304 "name": "BaseBdev3", 00:19:39.304 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:39.304 "is_configured": true, 00:19:39.304 "data_offset": 2048, 00:19:39.304 "data_size": 63488 00:19:39.304 } 00:19:39.304 ] 00:19:39.304 }' 00:19:39.304 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.304 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.869 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:40.434 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:40.434 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:40.693 [2024-07-12 08:46:15.635019] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.693 08:46:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.693 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.952 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.952 "name": "Existed_Raid", 00:19:40.952 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:40.952 "strip_size_kb": 64, 00:19:40.952 "state": "configuring", 00:19:40.952 "raid_level": "raid0", 00:19:40.952 "superblock": true, 00:19:40.952 "num_base_bdevs": 3, 00:19:40.952 "num_base_bdevs_discovered": 1, 00:19:40.952 "num_base_bdevs_operational": 3, 00:19:40.952 "base_bdevs_list": [ 00:19:40.952 { 00:19:40.952 "name": null, 00:19:40.952 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:40.952 "is_configured": false, 00:19:40.952 "data_offset": 2048, 00:19:40.952 "data_size": 63488 00:19:40.952 }, 00:19:40.952 { 00:19:40.952 "name": null, 00:19:40.952 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:40.952 "is_configured": false, 00:19:40.952 "data_offset": 2048, 00:19:40.952 "data_size": 63488 00:19:40.952 }, 00:19:40.952 { 00:19:40.952 "name": "BaseBdev3", 00:19:40.952 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:40.952 "is_configured": true, 00:19:40.952 "data_offset": 2048, 00:19:40.952 "data_size": 63488 00:19:40.952 } 00:19:40.952 ] 00:19:40.952 }' 00:19:40.952 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.952 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.887 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.887 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.145 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:42.145 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:42.403 [2024-07-12 08:46:17.411317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.403 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.662 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.662 "name": "Existed_Raid", 00:19:42.662 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:42.662 "strip_size_kb": 64, 00:19:42.662 "state": "configuring", 00:19:42.662 "raid_level": "raid0", 00:19:42.662 "superblock": true, 00:19:42.662 "num_base_bdevs": 3, 00:19:42.662 "num_base_bdevs_discovered": 2, 00:19:42.662 "num_base_bdevs_operational": 3, 00:19:42.662 "base_bdevs_list": [ 00:19:42.662 { 00:19:42.662 "name": null, 00:19:42.662 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:42.662 "is_configured": false, 00:19:42.662 "data_offset": 2048, 00:19:42.662 "data_size": 63488 00:19:42.662 }, 00:19:42.662 { 00:19:42.662 "name": "BaseBdev2", 00:19:42.662 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:42.662 "is_configured": true, 00:19:42.662 "data_offset": 2048, 00:19:42.662 "data_size": 63488 00:19:42.662 }, 00:19:42.662 { 00:19:42.662 "name": "BaseBdev3", 00:19:42.662 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:42.662 "is_configured": true, 00:19:42.662 "data_offset": 2048, 00:19:42.662 "data_size": 63488 00:19:42.662 } 00:19:42.662 ] 00:19:42.662 }' 00:19:42.662 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.662 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.613 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:43.613 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.870 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:43.870 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.870 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:44.128 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6489752f-5786-423c-9dc1-d2a621f38acd 00:19:44.386 [2024-07-12 08:46:19.458499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:44.386 [2024-07-12 08:46:19.458954] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:44.386 [2024-07-12 08:46:19.459079] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:19:44.386 NewBaseBdev 00:19:44.386 [2024-07-12 08:46:19.459233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:44.386 [2024-07-12 08:46:19.459787] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:44.386 [2024-07-12 08:46:19.459912] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:44.386 [2024-07-12 08:46:19.460193] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:44.386 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.644 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:44.903 [ 00:19:44.903 { 00:19:44.903 "name": "NewBaseBdev", 00:19:44.903 "aliases": [ 00:19:44.903 "6489752f-5786-423c-9dc1-d2a621f38acd" 00:19:44.903 ], 00:19:44.903 "product_name": "Malloc disk", 00:19:44.903 "block_size": 512, 00:19:44.903 "num_blocks": 65536, 00:19:44.903 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:44.903 "assigned_rate_limits": { 00:19:44.903 "rw_ios_per_sec": 0, 00:19:44.903 "rw_mbytes_per_sec": 0, 00:19:44.903 "r_mbytes_per_sec": 0, 00:19:44.903 "w_mbytes_per_sec": 0 00:19:44.903 }, 00:19:44.903 "claimed": true, 00:19:44.903 "claim_type": "exclusive_write", 00:19:44.903 "zoned": false, 00:19:44.903 "supported_io_types": { 00:19:44.903 "read": true, 00:19:44.903 "write": true, 00:19:44.903 "unmap": true, 00:19:44.903 "flush": true, 00:19:44.903 "reset": true, 00:19:44.903 "nvme_admin": false, 00:19:44.903 "nvme_io": false, 00:19:44.903 "nvme_io_md": false, 00:19:44.903 "write_zeroes": true, 00:19:44.903 "zcopy": true, 00:19:44.903 "get_zone_info": false, 00:19:44.903 "zone_management": false, 00:19:44.903 "zone_append": false, 00:19:44.903 "compare": false, 00:19:44.903 "compare_and_write": false, 00:19:44.903 "abort": true, 00:19:44.903 "seek_hole": false, 00:19:44.903 "seek_data": false, 00:19:44.903 "copy": true, 00:19:44.903 "nvme_iov_md": false 00:19:44.903 }, 00:19:44.903 "memory_domains": [ 00:19:44.903 { 00:19:44.903 "dma_device_id": "system", 00:19:44.903 "dma_device_type": 1 00:19:44.903 }, 00:19:44.903 { 00:19:44.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.903 "dma_device_type": 2 00:19:44.903 } 00:19:44.903 ], 00:19:44.903 "driver_specific": {} 00:19:44.903 } 00:19:44.903 ] 00:19:44.903 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:44.903 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:19:44.903 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.904 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.162 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.162 "name": "Existed_Raid", 00:19:45.162 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:45.162 "strip_size_kb": 64, 00:19:45.162 "state": "online", 00:19:45.162 "raid_level": "raid0", 00:19:45.162 "superblock": true, 00:19:45.162 "num_base_bdevs": 3, 00:19:45.162 "num_base_bdevs_discovered": 3, 00:19:45.162 "num_base_bdevs_operational": 3, 00:19:45.162 "base_bdevs_list": [ 00:19:45.162 { 00:19:45.162 "name": "NewBaseBdev", 00:19:45.162 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:45.162 "is_configured": true, 00:19:45.162 "data_offset": 2048, 00:19:45.162 "data_size": 63488 00:19:45.162 }, 00:19:45.162 { 00:19:45.162 "name": "BaseBdev2", 00:19:45.162 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:45.162 "is_configured": true, 00:19:45.162 "data_offset": 2048, 00:19:45.162 "data_size": 63488 00:19:45.162 }, 00:19:45.162 { 00:19:45.162 "name": "BaseBdev3", 00:19:45.162 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:45.162 "is_configured": true, 00:19:45.162 "data_offset": 2048, 00:19:45.162 "data_size": 63488 00:19:45.162 } 00:19:45.162 ] 00:19:45.162 }' 00:19:45.162 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.162 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:46.096 08:46:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:46.096 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:46.096 [2024-07-12 08:46:21.219324] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.096 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:46.096 "name": "Existed_Raid", 00:19:46.096 "aliases": [ 00:19:46.096 "dfa15805-397f-4649-887d-91a85e256228" 00:19:46.096 ], 00:19:46.096 "product_name": "Raid Volume", 00:19:46.096 "block_size": 512, 00:19:46.096 "num_blocks": 190464, 00:19:46.096 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:46.096 "assigned_rate_limits": { 00:19:46.096 "rw_ios_per_sec": 0, 00:19:46.096 "rw_mbytes_per_sec": 0, 00:19:46.096 "r_mbytes_per_sec": 0, 00:19:46.096 "w_mbytes_per_sec": 0 00:19:46.096 }, 00:19:46.096 "claimed": false, 00:19:46.096 "zoned": false, 00:19:46.096 "supported_io_types": { 00:19:46.096 "read": true, 00:19:46.096 "write": true, 00:19:46.096 "unmap": true, 00:19:46.096 "flush": true, 00:19:46.096 "reset": true, 00:19:46.096 "nvme_admin": false, 00:19:46.096 "nvme_io": false, 00:19:46.096 "nvme_io_md": false, 00:19:46.096 "write_zeroes": true, 00:19:46.096 "zcopy": false, 00:19:46.096 "get_zone_info": false, 00:19:46.096 "zone_management": false, 00:19:46.096 "zone_append": false, 00:19:46.096 "compare": false, 00:19:46.096 "compare_and_write": false, 00:19:46.096 "abort": false, 00:19:46.096 "seek_hole": false, 00:19:46.096 "seek_data": false, 00:19:46.096 "copy": false, 00:19:46.096 "nvme_iov_md": false 00:19:46.096 }, 00:19:46.096 "memory_domains": [ 00:19:46.096 { 00:19:46.096 "dma_device_id": "system", 00:19:46.096 "dma_device_type": 1 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.096 "dma_device_type": 2 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "dma_device_id": "system", 00:19:46.096 "dma_device_type": 1 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.096 "dma_device_type": 2 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "dma_device_id": "system", 00:19:46.096 "dma_device_type": 1 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.096 "dma_device_type": 2 00:19:46.096 } 00:19:46.096 ], 00:19:46.096 "driver_specific": { 00:19:46.096 "raid": { 00:19:46.096 "uuid": "dfa15805-397f-4649-887d-91a85e256228", 00:19:46.096 "strip_size_kb": 64, 00:19:46.096 "state": "online", 00:19:46.096 "raid_level": "raid0", 00:19:46.096 "superblock": true, 00:19:46.096 "num_base_bdevs": 3, 00:19:46.096 "num_base_bdevs_discovered": 3, 00:19:46.096 "num_base_bdevs_operational": 3, 00:19:46.096 "base_bdevs_list": [ 00:19:46.096 { 00:19:46.096 "name": "NewBaseBdev", 00:19:46.096 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:46.096 "is_configured": true, 00:19:46.096 "data_offset": 2048, 00:19:46.096 "data_size": 63488 00:19:46.096 }, 00:19:46.096 { 00:19:46.096 "name": "BaseBdev2", 00:19:46.097 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:46.097 "is_configured": true, 00:19:46.097 "data_offset": 2048, 00:19:46.097 "data_size": 63488 00:19:46.097 }, 00:19:46.097 { 00:19:46.097 "name": "BaseBdev3", 00:19:46.097 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:46.097 "is_configured": true, 00:19:46.097 "data_offset": 2048, 00:19:46.097 "data_size": 
63488 00:19:46.097 } 00:19:46.097 ] 00:19:46.097 } 00:19:46.097 } 00:19:46.097 }' 00:19:46.097 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:46.354 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:46.354 BaseBdev2 00:19:46.354 BaseBdev3' 00:19:46.354 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:46.354 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:46.354 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:46.613 "name": "NewBaseBdev", 00:19:46.613 "aliases": [ 00:19:46.613 "6489752f-5786-423c-9dc1-d2a621f38acd" 00:19:46.613 ], 00:19:46.613 "product_name": "Malloc disk", 00:19:46.613 "block_size": 512, 00:19:46.613 "num_blocks": 65536, 00:19:46.613 "uuid": "6489752f-5786-423c-9dc1-d2a621f38acd", 00:19:46.613 "assigned_rate_limits": { 00:19:46.613 "rw_ios_per_sec": 0, 00:19:46.613 "rw_mbytes_per_sec": 0, 00:19:46.613 "r_mbytes_per_sec": 0, 00:19:46.613 "w_mbytes_per_sec": 0 00:19:46.613 }, 00:19:46.613 "claimed": true, 00:19:46.613 "claim_type": "exclusive_write", 00:19:46.613 "zoned": false, 00:19:46.613 "supported_io_types": { 00:19:46.613 "read": true, 00:19:46.613 "write": true, 00:19:46.613 "unmap": true, 00:19:46.613 "flush": true, 00:19:46.613 "reset": true, 00:19:46.613 "nvme_admin": false, 00:19:46.613 "nvme_io": false, 00:19:46.613 "nvme_io_md": false, 00:19:46.613 "write_zeroes": true, 00:19:46.613 "zcopy": true, 00:19:46.613 "get_zone_info": false, 00:19:46.613 "zone_management": false, 00:19:46.613 "zone_append": false, 00:19:46.613 "compare": false, 00:19:46.613 "compare_and_write": false, 00:19:46.613 "abort": true, 00:19:46.613 "seek_hole": false, 00:19:46.613 "seek_data": false, 00:19:46.613 "copy": true, 00:19:46.613 "nvme_iov_md": false 00:19:46.613 }, 00:19:46.613 "memory_domains": [ 00:19:46.613 { 00:19:46.613 "dma_device_id": "system", 00:19:46.613 "dma_device_type": 1 00:19:46.613 }, 00:19:46.613 { 00:19:46.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.613 "dma_device_type": 2 00:19:46.613 } 00:19:46.613 ], 00:19:46.613 "driver_specific": {} 00:19:46.613 }' 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:46.613 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:46.871 08:46:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:46.871 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:47.129 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:47.129 "name": "BaseBdev2", 00:19:47.129 "aliases": [ 00:19:47.129 "011ccadb-07cb-4e7b-a3d1-edbfa0726e70" 00:19:47.129 ], 00:19:47.129 "product_name": "Malloc disk", 00:19:47.129 "block_size": 512, 00:19:47.129 "num_blocks": 65536, 00:19:47.129 "uuid": "011ccadb-07cb-4e7b-a3d1-edbfa0726e70", 00:19:47.129 "assigned_rate_limits": { 00:19:47.129 "rw_ios_per_sec": 0, 00:19:47.129 "rw_mbytes_per_sec": 0, 00:19:47.129 "r_mbytes_per_sec": 0, 00:19:47.129 "w_mbytes_per_sec": 0 00:19:47.129 }, 00:19:47.129 "claimed": true, 00:19:47.129 "claim_type": "exclusive_write", 00:19:47.129 "zoned": false, 00:19:47.129 "supported_io_types": { 00:19:47.129 "read": true, 00:19:47.129 "write": true, 00:19:47.129 "unmap": true, 00:19:47.129 "flush": true, 00:19:47.129 "reset": true, 00:19:47.129 "nvme_admin": false, 00:19:47.129 "nvme_io": false, 00:19:47.129 "nvme_io_md": false, 00:19:47.129 "write_zeroes": true, 00:19:47.129 "zcopy": true, 00:19:47.129 "get_zone_info": false, 00:19:47.129 "zone_management": false, 00:19:47.129 "zone_append": false, 00:19:47.129 "compare": false, 00:19:47.129 "compare_and_write": false, 00:19:47.129 "abort": true, 00:19:47.129 "seek_hole": false, 00:19:47.129 "seek_data": false, 00:19:47.129 "copy": true, 00:19:47.129 "nvme_iov_md": false 00:19:47.129 }, 00:19:47.129 "memory_domains": [ 00:19:47.129 { 00:19:47.129 "dma_device_id": "system", 00:19:47.129 "dma_device_type": 1 00:19:47.129 }, 00:19:47.129 { 00:19:47.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.129 "dma_device_type": 2 00:19:47.129 } 00:19:47.129 ], 00:19:47.129 "driver_specific": {} 00:19:47.129 }' 00:19:47.129 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:47.388 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:47.647 08:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:48.213 "name": "BaseBdev3", 00:19:48.213 "aliases": [ 00:19:48.213 "26cf3651-d556-4f72-a3bb-5ce33858c3e4" 00:19:48.213 ], 00:19:48.213 "product_name": "Malloc disk", 00:19:48.213 "block_size": 512, 00:19:48.213 "num_blocks": 65536, 00:19:48.213 "uuid": "26cf3651-d556-4f72-a3bb-5ce33858c3e4", 00:19:48.213 "assigned_rate_limits": { 00:19:48.213 "rw_ios_per_sec": 0, 00:19:48.213 "rw_mbytes_per_sec": 0, 00:19:48.213 "r_mbytes_per_sec": 0, 00:19:48.213 "w_mbytes_per_sec": 0 00:19:48.213 }, 00:19:48.213 "claimed": true, 00:19:48.213 "claim_type": "exclusive_write", 00:19:48.213 "zoned": false, 00:19:48.213 "supported_io_types": { 00:19:48.213 "read": true, 00:19:48.213 "write": true, 00:19:48.213 "unmap": true, 00:19:48.213 "flush": true, 00:19:48.213 "reset": true, 00:19:48.213 "nvme_admin": false, 00:19:48.213 "nvme_io": false, 00:19:48.213 "nvme_io_md": false, 00:19:48.213 "write_zeroes": true, 00:19:48.213 "zcopy": true, 00:19:48.213 "get_zone_info": false, 00:19:48.213 "zone_management": false, 00:19:48.213 "zone_append": false, 00:19:48.213 "compare": false, 00:19:48.213 "compare_and_write": false, 00:19:48.213 "abort": true, 00:19:48.213 "seek_hole": false, 00:19:48.213 "seek_data": false, 00:19:48.213 "copy": true, 00:19:48.213 "nvme_iov_md": false 00:19:48.213 }, 00:19:48.213 "memory_domains": [ 00:19:48.213 { 00:19:48.213 "dma_device_id": "system", 00:19:48.213 "dma_device_type": 1 00:19:48.213 }, 00:19:48.213 { 00:19:48.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.213 "dma_device_type": 2 00:19:48.213 } 00:19:48.213 ], 00:19:48.213 "driver_specific": {} 00:19:48.213 }' 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:48.213 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
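The per-bdev checks above reduce to a handful of bdev_get_bdevs queries filtered through jq: list the configured members of the raid volume, then compare block_size, md_size, md_interleave, and dif_type for each one. A minimal standalone sketch of the same pattern, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and exposes a raid bdev named Existed_Raid with 512-byte blocks (both taken from this run):

#!/usr/bin/env bash
# Verify per-base-bdev properties of a raid bdev, mirroring the checks above.
set -euo pipefail
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Names of all configured base bdevs, e.g. "NewBaseBdev BaseBdev2 BaseBdev3".
names=$($rpc bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[] | .driver_specific.raid.base_bdevs_list[]
             | select(.is_configured == true).name')

for name in $names; do
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # Every member must match the volume's block size and carry no
    # metadata or DIF configuration.
    [[ $(jq .block_size    <<<"$info") == 512  ]]
    [[ $(jq .md_size       <<<"$info") == null ]]
    [[ $(jq .md_interleave <<<"$info") == null ]]
    [[ $(jq .dif_type      <<<"$info") == null ]]
    echo "$name: ok"
done

With the checks satisfied, the test tears the volume down, which is what the bdev_raid_delete call below exercises.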
00:19:48.470 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:48.727 [2024-07-12 08:46:23.907634] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:48.727 [2024-07-12 08:46:23.907885] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.727 [2024-07-12 08:46:23.908084] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.727 [2024-07-12 08:46:23.908296] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.727 [2024-07-12 08:46:23.908423] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 127056 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 127056 ']' 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 127056 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127056 00:19:48.983 killing process with pid 127056 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127056' 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 127056 00:19:48.983 08:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 127056 00:19:48.983 [2024-07-12 08:46:23.952780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.241 [2024-07-12 08:46:24.200362] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.172 ************************************ 00:19:50.172 END TEST raid_state_function_test_sb 00:19:50.172 ************************************ 00:19:50.172 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:50.172 00:19:50.172 real 0m36.316s 00:19:50.172 user 1m8.302s 00:19:50.172 sys 0m3.919s 00:19:50.172 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.172 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.172 08:46:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:50.172 08:46:25 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:19:50.172 08:46:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:50.172 08:46:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.172 08:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 ************************************ 00:19:50.430 START TEST raid_superblock_test 00:19:50.430 
************************************ 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=128155 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 128155 /var/tmp/spdk-raid.sock 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 128155 ']' 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:50.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.430 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 [2024-07-12 08:46:25.443755] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
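Before the trace continues, it helps to see the shape of the setup raid_superblock_test is about to perform: three 32 MB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, assembled into a raid0 volume with an on-disk superblock. A condensed sketch using only RPCs that appear in the trace below (socket path, sizes, names, and the -s superblock flag all match this run):

# Assumes bdev_svc is already listening on /var/tmp/spdk-raid.sock.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3; do
    # 32 MB malloc backing device with 512-byte blocks...
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    # ...wrapped in a passthru bdev whose fixed UUID lets the raid
    # superblock identify the member across restarts.
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# raid0 across the passthru bdevs, 64 KB strip size; -s writes a superblock.
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

Because each member then carries a superblock naming pt1..pt3, the later attempt in this test to build raid_bdev1 directly from malloc1..malloc3 is expected to fail with -17 (File exists), and deleting a passthru member drops the volume back to the "configuring" state until it reappears.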
00:19:50.431 [2024-07-12 08:46:25.444893] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128155 ] 00:19:50.431 [2024-07-12 08:46:25.617215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.996 [2024-07-12 08:46:25.883006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.996 [2024-07-12 08:46:26.079873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:51.253 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:51.510 malloc1 00:19:51.510 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:52.074 [2024-07-12 08:46:26.976027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:52.074 [2024-07-12 08:46:26.976440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.074 [2024-07-12 08:46:26.976592] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:52.074 [2024-07-12 08:46:26.976711] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.074 [2024-07-12 08:46:26.979424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.074 [2024-07-12 08:46:26.979588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:52.074 pt1 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:52.074 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:52.074 malloc2 00:19:52.331 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:52.332 [2024-07-12 08:46:27.498448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:52.332 [2024-07-12 08:46:27.498848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.332 [2024-07-12 08:46:27.498999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:52.332 [2024-07-12 08:46:27.499122] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.332 [2024-07-12 08:46:27.501729] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.332 [2024-07-12 08:46:27.501889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:52.332 pt2 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:52.332 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:52.589 malloc3 00:19:52.849 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:52.849 [2024-07-12 08:46:28.017263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:52.849 [2024-07-12 08:46:28.017632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.849 [2024-07-12 08:46:28.017783] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:19:52.849 [2024-07-12 08:46:28.017904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.849 [2024-07-12 08:46:28.020542] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.849 [2024-07-12 08:46:28.020713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:52.849 pt3 00:19:52.849 
08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:52.849 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:52.849 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:53.106 [2024-07-12 08:46:28.293567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:53.106 [2024-07-12 08:46:28.296011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.106 [2024-07-12 08:46:28.296244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:53.106 [2024-07-12 08:46:28.296628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:53.106 [2024-07-12 08:46:28.296750] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:53.106 [2024-07-12 08:46:28.297016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:53.106 [2024-07-12 08:46:28.297537] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:53.106 [2024-07-12 08:46:28.297654] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:19:53.106 [2024-07-12 08:46:28.297962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.364 "name": "raid_bdev1", 00:19:53.364 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:19:53.364 "strip_size_kb": 64, 00:19:53.364 "state": "online", 00:19:53.364 "raid_level": "raid0", 00:19:53.364 "superblock": true, 00:19:53.364 "num_base_bdevs": 3, 00:19:53.364 "num_base_bdevs_discovered": 3, 00:19:53.364 "num_base_bdevs_operational": 3, 00:19:53.364 "base_bdevs_list": [ 00:19:53.364 { 00:19:53.364 "name": "pt1", 00:19:53.364 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:53.364 "is_configured": true, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 }, 00:19:53.364 { 00:19:53.364 "name": "pt2", 00:19:53.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.364 "is_configured": true, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 }, 00:19:53.364 { 00:19:53.364 "name": "pt3", 00:19:53.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:53.364 "is_configured": true, 00:19:53.364 "data_offset": 2048, 00:19:53.364 "data_size": 63488 00:19:53.364 } 00:19:53.364 ] 00:19:53.364 }' 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.364 08:46:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:54.325 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:54.582 [2024-07-12 08:46:29.538540] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.582 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:54.582 "name": "raid_bdev1", 00:19:54.582 "aliases": [ 00:19:54.582 "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1" 00:19:54.582 ], 00:19:54.582 "product_name": "Raid Volume", 00:19:54.582 "block_size": 512, 00:19:54.582 "num_blocks": 190464, 00:19:54.582 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:19:54.582 "assigned_rate_limits": { 00:19:54.582 "rw_ios_per_sec": 0, 00:19:54.582 "rw_mbytes_per_sec": 0, 00:19:54.582 "r_mbytes_per_sec": 0, 00:19:54.582 "w_mbytes_per_sec": 0 00:19:54.582 }, 00:19:54.582 "claimed": false, 00:19:54.582 "zoned": false, 00:19:54.582 "supported_io_types": { 00:19:54.582 "read": true, 00:19:54.582 "write": true, 00:19:54.582 "unmap": true, 00:19:54.582 "flush": true, 00:19:54.582 "reset": true, 00:19:54.582 "nvme_admin": false, 00:19:54.582 "nvme_io": false, 00:19:54.582 "nvme_io_md": false, 00:19:54.582 "write_zeroes": true, 00:19:54.582 "zcopy": false, 00:19:54.582 "get_zone_info": false, 00:19:54.583 "zone_management": false, 00:19:54.583 "zone_append": false, 00:19:54.583 "compare": false, 00:19:54.583 "compare_and_write": false, 00:19:54.583 "abort": false, 00:19:54.583 "seek_hole": false, 00:19:54.583 "seek_data": false, 00:19:54.583 "copy": false, 00:19:54.583 "nvme_iov_md": false 00:19:54.583 }, 00:19:54.583 "memory_domains": [ 00:19:54.583 { 00:19:54.583 "dma_device_id": "system", 00:19:54.583 "dma_device_type": 1 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.583 "dma_device_type": 2 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "dma_device_id": "system", 00:19:54.583 "dma_device_type": 1 00:19:54.583 }, 
00:19:54.583 { 00:19:54.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.583 "dma_device_type": 2 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "dma_device_id": "system", 00:19:54.583 "dma_device_type": 1 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.583 "dma_device_type": 2 00:19:54.583 } 00:19:54.583 ], 00:19:54.583 "driver_specific": { 00:19:54.583 "raid": { 00:19:54.583 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:19:54.583 "strip_size_kb": 64, 00:19:54.583 "state": "online", 00:19:54.583 "raid_level": "raid0", 00:19:54.583 "superblock": true, 00:19:54.583 "num_base_bdevs": 3, 00:19:54.583 "num_base_bdevs_discovered": 3, 00:19:54.583 "num_base_bdevs_operational": 3, 00:19:54.583 "base_bdevs_list": [ 00:19:54.583 { 00:19:54.583 "name": "pt1", 00:19:54.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:54.583 "is_configured": true, 00:19:54.583 "data_offset": 2048, 00:19:54.583 "data_size": 63488 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "name": "pt2", 00:19:54.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:54.583 "is_configured": true, 00:19:54.583 "data_offset": 2048, 00:19:54.583 "data_size": 63488 00:19:54.583 }, 00:19:54.583 { 00:19:54.583 "name": "pt3", 00:19:54.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:54.583 "is_configured": true, 00:19:54.583 "data_offset": 2048, 00:19:54.583 "data_size": 63488 00:19:54.583 } 00:19:54.583 ] 00:19:54.583 } 00:19:54.583 } 00:19:54.583 }' 00:19:54.583 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:54.583 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:54.583 pt2 00:19:54.583 pt3' 00:19:54.583 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:54.583 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:54.583 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.840 "name": "pt1", 00:19:54.840 "aliases": [ 00:19:54.840 "00000000-0000-0000-0000-000000000001" 00:19:54.840 ], 00:19:54.840 "product_name": "passthru", 00:19:54.840 "block_size": 512, 00:19:54.840 "num_blocks": 65536, 00:19:54.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:54.840 "assigned_rate_limits": { 00:19:54.840 "rw_ios_per_sec": 0, 00:19:54.840 "rw_mbytes_per_sec": 0, 00:19:54.840 "r_mbytes_per_sec": 0, 00:19:54.840 "w_mbytes_per_sec": 0 00:19:54.840 }, 00:19:54.840 "claimed": true, 00:19:54.840 "claim_type": "exclusive_write", 00:19:54.840 "zoned": false, 00:19:54.840 "supported_io_types": { 00:19:54.840 "read": true, 00:19:54.840 "write": true, 00:19:54.840 "unmap": true, 00:19:54.840 "flush": true, 00:19:54.840 "reset": true, 00:19:54.840 "nvme_admin": false, 00:19:54.840 "nvme_io": false, 00:19:54.840 "nvme_io_md": false, 00:19:54.840 "write_zeroes": true, 00:19:54.840 "zcopy": true, 00:19:54.840 "get_zone_info": false, 00:19:54.840 "zone_management": false, 00:19:54.840 "zone_append": false, 00:19:54.840 "compare": false, 00:19:54.840 "compare_and_write": false, 00:19:54.840 "abort": true, 00:19:54.840 "seek_hole": false, 00:19:54.840 "seek_data": false, 00:19:54.840 "copy": true, 00:19:54.840 "nvme_iov_md": false 
00:19:54.840 }, 00:19:54.840 "memory_domains": [ 00:19:54.840 { 00:19:54.840 "dma_device_id": "system", 00:19:54.840 "dma_device_type": 1 00:19:54.840 }, 00:19:54.840 { 00:19:54.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.840 "dma_device_type": 2 00:19:54.840 } 00:19:54.840 ], 00:19:54.840 "driver_specific": { 00:19:54.840 "passthru": { 00:19:54.840 "name": "pt1", 00:19:54.840 "base_bdev_name": "malloc1" 00:19:54.840 } 00:19:54.840 } 00:19:54.840 }' 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.840 08:46:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.096 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.353 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.353 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.353 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:55.353 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.610 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.610 "name": "pt2", 00:19:55.610 "aliases": [ 00:19:55.610 "00000000-0000-0000-0000-000000000002" 00:19:55.610 ], 00:19:55.610 "product_name": "passthru", 00:19:55.610 "block_size": 512, 00:19:55.610 "num_blocks": 65536, 00:19:55.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.610 "assigned_rate_limits": { 00:19:55.610 "rw_ios_per_sec": 0, 00:19:55.610 "rw_mbytes_per_sec": 0, 00:19:55.610 "r_mbytes_per_sec": 0, 00:19:55.610 "w_mbytes_per_sec": 0 00:19:55.610 }, 00:19:55.610 "claimed": true, 00:19:55.610 "claim_type": "exclusive_write", 00:19:55.610 "zoned": false, 00:19:55.610 "supported_io_types": { 00:19:55.610 "read": true, 00:19:55.610 "write": true, 00:19:55.610 "unmap": true, 00:19:55.610 "flush": true, 00:19:55.610 "reset": true, 00:19:55.610 "nvme_admin": false, 00:19:55.610 "nvme_io": false, 00:19:55.610 "nvme_io_md": false, 00:19:55.610 "write_zeroes": true, 00:19:55.610 "zcopy": true, 00:19:55.610 "get_zone_info": false, 00:19:55.610 "zone_management": false, 00:19:55.610 "zone_append": false, 00:19:55.610 "compare": false, 00:19:55.610 "compare_and_write": false, 00:19:55.610 "abort": true, 00:19:55.610 "seek_hole": false, 00:19:55.610 "seek_data": false, 00:19:55.610 "copy": true, 00:19:55.610 "nvme_iov_md": false 00:19:55.610 }, 00:19:55.610 "memory_domains": [ 00:19:55.610 { 00:19:55.610 "dma_device_id": "system", 00:19:55.610 "dma_device_type": 1 00:19:55.610 }, 
00:19:55.610 { 00:19:55.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.610 "dma_device_type": 2 00:19:55.610 } 00:19:55.610 ], 00:19:55.610 "driver_specific": { 00:19:55.610 "passthru": { 00:19:55.610 "name": "pt2", 00:19:55.610 "base_bdev_name": "malloc2" 00:19:55.610 } 00:19:55.610 } 00:19:55.610 }' 00:19:55.610 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.611 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.868 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.868 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.868 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.868 08:46:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.868 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.868 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.868 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:55.868 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.126 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.126 "name": "pt3", 00:19:56.126 "aliases": [ 00:19:56.126 "00000000-0000-0000-0000-000000000003" 00:19:56.126 ], 00:19:56.126 "product_name": "passthru", 00:19:56.126 "block_size": 512, 00:19:56.126 "num_blocks": 65536, 00:19:56.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.126 "assigned_rate_limits": { 00:19:56.126 "rw_ios_per_sec": 0, 00:19:56.126 "rw_mbytes_per_sec": 0, 00:19:56.126 "r_mbytes_per_sec": 0, 00:19:56.126 "w_mbytes_per_sec": 0 00:19:56.126 }, 00:19:56.126 "claimed": true, 00:19:56.126 "claim_type": "exclusive_write", 00:19:56.126 "zoned": false, 00:19:56.126 "supported_io_types": { 00:19:56.126 "read": true, 00:19:56.126 "write": true, 00:19:56.126 "unmap": true, 00:19:56.126 "flush": true, 00:19:56.126 "reset": true, 00:19:56.126 "nvme_admin": false, 00:19:56.126 "nvme_io": false, 00:19:56.126 "nvme_io_md": false, 00:19:56.126 "write_zeroes": true, 00:19:56.126 "zcopy": true, 00:19:56.126 "get_zone_info": false, 00:19:56.126 "zone_management": false, 00:19:56.126 "zone_append": false, 00:19:56.126 "compare": false, 00:19:56.126 "compare_and_write": false, 00:19:56.126 "abort": true, 00:19:56.126 "seek_hole": false, 00:19:56.126 "seek_data": false, 00:19:56.126 "copy": true, 00:19:56.126 "nvme_iov_md": false 00:19:56.126 }, 00:19:56.126 "memory_domains": [ 00:19:56.126 { 00:19:56.126 "dma_device_id": "system", 00:19:56.126 "dma_device_type": 1 00:19:56.126 }, 00:19:56.126 { 00:19:56.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.126 "dma_device_type": 2 00:19:56.126 } 00:19:56.126 ], 00:19:56.126 
"driver_specific": { 00:19:56.126 "passthru": { 00:19:56.126 "name": "pt3", 00:19:56.126 "base_bdev_name": "malloc3" 00:19:56.126 } 00:19:56.126 } 00:19:56.126 }' 00:19:56.126 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.383 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:56.640 08:46:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:56.898 [2024-07-12 08:46:32.043076] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.898 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1 00:19:56.898 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1 ']' 00:19:56.898 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.155 [2024-07-12 08:46:32.346839] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.156 [2024-07-12 08:46:32.347073] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.156 [2024-07-12 08:46:32.347281] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.156 [2024-07-12 08:46:32.347460] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.156 [2024-07-12 08:46:32.347576] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:19:57.414 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.414 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:57.672 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:57.672 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:57.672 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:57.672 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:57.931 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:57.931 08:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:58.189 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:58.189 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:58.445 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:58.445 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:58.702 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:58.703 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:58.960 [2024-07-12 08:46:33.960818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:58.960 [2024-07-12 08:46:33.963357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:58.960 [2024-07-12 08:46:33.963555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:58.960 [2024-07-12 08:46:33.963730] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:58.960 [2024-07-12 
08:46:33.963938] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:58.960 [2024-07-12 08:46:33.964099] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:58.960 [2024-07-12 08:46:33.964276] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.960 [2024-07-12 08:46:33.964375] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:19:58.960 request: 00:19:58.960 { 00:19:58.960 "name": "raid_bdev1", 00:19:58.960 "raid_level": "raid0", 00:19:58.961 "base_bdevs": [ 00:19:58.961 "malloc1", 00:19:58.961 "malloc2", 00:19:58.961 "malloc3" 00:19:58.961 ], 00:19:58.961 "strip_size_kb": 64, 00:19:58.961 "superblock": false, 00:19:58.961 "method": "bdev_raid_create", 00:19:58.961 "req_id": 1 00:19:58.961 } 00:19:58.961 Got JSON-RPC error response 00:19:58.961 response: 00:19:58.961 { 00:19:58.961 "code": -17, 00:19:58.961 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:58.961 } 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:58.961 08:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.219 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:59.219 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:59.219 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.477 [2024-07-12 08:46:34.541019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.477 [2024-07-12 08:46:34.541303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.477 [2024-07-12 08:46:34.541480] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:59.477 [2024-07-12 08:46:34.541597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.477 [2024-07-12 08:46:34.544322] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.477 [2024-07-12 08:46:34.544484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.477 [2024-07-12 08:46:34.544706] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:59.477 [2024-07-12 08:46:34.544872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:59.477 pt1 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.477 08:46:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.477 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.734 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.734 "name": "raid_bdev1", 00:19:59.734 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:19:59.734 "strip_size_kb": 64, 00:19:59.734 "state": "configuring", 00:19:59.734 "raid_level": "raid0", 00:19:59.734 "superblock": true, 00:19:59.734 "num_base_bdevs": 3, 00:19:59.734 "num_base_bdevs_discovered": 1, 00:19:59.734 "num_base_bdevs_operational": 3, 00:19:59.734 "base_bdevs_list": [ 00:19:59.734 { 00:19:59.734 "name": "pt1", 00:19:59.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.734 "is_configured": true, 00:19:59.734 "data_offset": 2048, 00:19:59.734 "data_size": 63488 00:19:59.734 }, 00:19:59.734 { 00:19:59.734 "name": null, 00:19:59.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.734 "is_configured": false, 00:19:59.734 "data_offset": 2048, 00:19:59.734 "data_size": 63488 00:19:59.734 }, 00:19:59.734 { 00:19:59.734 "name": null, 00:19:59.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.734 "is_configured": false, 00:19:59.734 "data_offset": 2048, 00:19:59.734 "data_size": 63488 00:19:59.734 } 00:19:59.734 ] 00:19:59.734 }' 00:19:59.734 08:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.734 08:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.670 08:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:00.670 08:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.670 [2024-07-12 08:46:35.809594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.670 [2024-07-12 08:46:35.809901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.670 [2024-07-12 08:46:35.809993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:00.670 [2024-07-12 08:46:35.810202] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.670 [2024-07-12 08:46:35.810807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.670 [2024-07-12 08:46:35.810957] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:00.670 [2024-07-12 08:46:35.811173] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:00.670 [2024-07-12 08:46:35.811309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.670 pt2 00:20:00.670 08:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:00.928 [2024-07-12 08:46:36.049696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.928 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.186 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.186 "name": "raid_bdev1", 00:20:01.186 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:20:01.186 "strip_size_kb": 64, 00:20:01.186 "state": "configuring", 00:20:01.186 "raid_level": "raid0", 00:20:01.186 "superblock": true, 00:20:01.186 "num_base_bdevs": 3, 00:20:01.186 "num_base_bdevs_discovered": 1, 00:20:01.186 "num_base_bdevs_operational": 3, 00:20:01.186 "base_bdevs_list": [ 00:20:01.186 { 00:20:01.186 "name": "pt1", 00:20:01.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.186 "is_configured": true, 00:20:01.186 "data_offset": 2048, 00:20:01.186 "data_size": 63488 00:20:01.186 }, 00:20:01.186 { 00:20:01.186 "name": null, 00:20:01.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.186 "is_configured": false, 00:20:01.186 "data_offset": 2048, 00:20:01.186 "data_size": 63488 00:20:01.186 }, 00:20:01.186 { 00:20:01.186 "name": null, 00:20:01.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.186 "is_configured": false, 00:20:01.186 "data_offset": 2048, 00:20:01.186 "data_size": 63488 00:20:01.186 } 00:20:01.186 ] 00:20:01.186 }' 00:20:01.186 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.186 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.121 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:02.121 08:46:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.121 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.121 [2024-07-12 08:46:37.217933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.121 [2024-07-12 08:46:37.218264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.121 [2024-07-12 08:46:37.218341] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:02.121 [2024-07-12 08:46:37.218630] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.121 [2024-07-12 08:46:37.219309] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.121 [2024-07-12 08:46:37.219490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.121 [2024-07-12 08:46:37.219751] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:02.121 [2024-07-12 08:46:37.219939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.121 pt2 00:20:02.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:02.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:02.414 [2024-07-12 08:46:37.457984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:02.414 [2024-07-12 08:46:37.458276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.414 [2024-07-12 08:46:37.458436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:02.414 [2024-07-12 08:46:37.458554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.414 [2024-07-12 08:46:37.459245] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.414 [2024-07-12 08:46:37.459413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:02.414 [2024-07-12 08:46:37.459634] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:02.414 [2024-07-12 08:46:37.459768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:02.414 [2024-07-12 08:46:37.460009] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:20:02.414 [2024-07-12 08:46:37.460125] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:02.414 [2024-07-12 08:46:37.460339] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:02.414 [2024-07-12 08:46:37.460803] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:20:02.414 [2024-07-12 08:46:37.460921] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:20:02.414 [2024-07-12 08:46:37.461172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.414 pt3 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.414 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.708 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.708 "name": "raid_bdev1", 00:20:02.708 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:20:02.708 "strip_size_kb": 64, 00:20:02.708 "state": "online", 00:20:02.708 "raid_level": "raid0", 00:20:02.708 "superblock": true, 00:20:02.708 "num_base_bdevs": 3, 00:20:02.708 "num_base_bdevs_discovered": 3, 00:20:02.708 "num_base_bdevs_operational": 3, 00:20:02.708 "base_bdevs_list": [ 00:20:02.708 { 00:20:02.708 "name": "pt1", 00:20:02.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.708 "is_configured": true, 00:20:02.708 "data_offset": 2048, 00:20:02.708 "data_size": 63488 00:20:02.708 }, 00:20:02.708 { 00:20:02.708 "name": "pt2", 00:20:02.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.708 "is_configured": true, 00:20:02.708 "data_offset": 2048, 00:20:02.708 "data_size": 63488 00:20:02.708 }, 00:20:02.708 { 00:20:02.708 "name": "pt3", 00:20:02.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.708 "is_configured": true, 00:20:02.708 "data_offset": 2048, 00:20:02.708 "data_size": 63488 00:20:02.708 } 00:20:02.708 ] 00:20:02.708 }' 00:20:02.708 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.708 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
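The verify_raid_bdev_state helper traced above reduces to a single RPC call plus a jq filter over its output. A minimal sketch of that check, using only the rpc.py and jq invocations visible in this trace (the individual field comparisons at the end are assumed from the local variables the helper sets and are not shown verbatim in the log):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # fetch the raid bdev entry the same way bdev_raid.sh@126 does
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # assumed comparisons against the arguments passed in (online raid0 64 3)
    [[ $(jq -r .state         <<<"$info") == online ]]
    [[ $(jq -r .raid_level    <<<"$info") == raid0 ]]
    [[ $(jq -r .strip_size_kb <<<"$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 3 ]]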
00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.275 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:03.840 [2024-07-12 08:46:38.738578] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:03.840 "name": "raid_bdev1", 00:20:03.840 "aliases": [ 00:20:03.840 "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1" 00:20:03.840 ], 00:20:03.840 "product_name": "Raid Volume", 00:20:03.840 "block_size": 512, 00:20:03.840 "num_blocks": 190464, 00:20:03.840 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:20:03.840 "assigned_rate_limits": { 00:20:03.840 "rw_ios_per_sec": 0, 00:20:03.840 "rw_mbytes_per_sec": 0, 00:20:03.840 "r_mbytes_per_sec": 0, 00:20:03.840 "w_mbytes_per_sec": 0 00:20:03.840 }, 00:20:03.840 "claimed": false, 00:20:03.840 "zoned": false, 00:20:03.840 "supported_io_types": { 00:20:03.840 "read": true, 00:20:03.840 "write": true, 00:20:03.840 "unmap": true, 00:20:03.840 "flush": true, 00:20:03.840 "reset": true, 00:20:03.840 "nvme_admin": false, 00:20:03.840 "nvme_io": false, 00:20:03.840 "nvme_io_md": false, 00:20:03.840 "write_zeroes": true, 00:20:03.840 "zcopy": false, 00:20:03.840 "get_zone_info": false, 00:20:03.840 "zone_management": false, 00:20:03.840 "zone_append": false, 00:20:03.840 "compare": false, 00:20:03.840 "compare_and_write": false, 00:20:03.840 "abort": false, 00:20:03.840 "seek_hole": false, 00:20:03.840 "seek_data": false, 00:20:03.840 "copy": false, 00:20:03.840 "nvme_iov_md": false 00:20:03.840 }, 00:20:03.840 "memory_domains": [ 00:20:03.840 { 00:20:03.840 "dma_device_id": "system", 00:20:03.840 "dma_device_type": 1 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.840 "dma_device_type": 2 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "dma_device_id": "system", 00:20:03.840 "dma_device_type": 1 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.840 "dma_device_type": 2 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "dma_device_id": "system", 00:20:03.840 "dma_device_type": 1 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.840 "dma_device_type": 2 00:20:03.840 } 00:20:03.840 ], 00:20:03.840 "driver_specific": { 00:20:03.840 "raid": { 00:20:03.840 "uuid": "9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1", 00:20:03.840 "strip_size_kb": 64, 00:20:03.840 "state": "online", 00:20:03.840 "raid_level": "raid0", 00:20:03.840 "superblock": true, 00:20:03.840 "num_base_bdevs": 3, 00:20:03.840 "num_base_bdevs_discovered": 3, 00:20:03.840 "num_base_bdevs_operational": 3, 00:20:03.840 "base_bdevs_list": [ 00:20:03.840 { 00:20:03.840 "name": "pt1", 00:20:03.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.840 "is_configured": true, 00:20:03.840 "data_offset": 2048, 00:20:03.840 "data_size": 63488 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "name": "pt2", 00:20:03.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.840 "is_configured": true, 00:20:03.840 "data_offset": 2048, 00:20:03.840 "data_size": 63488 00:20:03.840 }, 00:20:03.840 { 00:20:03.840 "name": "pt3", 00:20:03.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.840 "is_configured": true, 00:20:03.840 "data_offset": 2048, 00:20:03.840 "data_size": 63488 00:20:03.840 } 
00:20:03.840 ] 00:20:03.840 } 00:20:03.840 } 00:20:03.840 }' 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:03.840 pt2 00:20:03.840 pt3' 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:03.840 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.099 "name": "pt1", 00:20:04.099 "aliases": [ 00:20:04.099 "00000000-0000-0000-0000-000000000001" 00:20:04.099 ], 00:20:04.099 "product_name": "passthru", 00:20:04.099 "block_size": 512, 00:20:04.099 "num_blocks": 65536, 00:20:04.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.099 "assigned_rate_limits": { 00:20:04.099 "rw_ios_per_sec": 0, 00:20:04.099 "rw_mbytes_per_sec": 0, 00:20:04.099 "r_mbytes_per_sec": 0, 00:20:04.099 "w_mbytes_per_sec": 0 00:20:04.099 }, 00:20:04.099 "claimed": true, 00:20:04.099 "claim_type": "exclusive_write", 00:20:04.099 "zoned": false, 00:20:04.099 "supported_io_types": { 00:20:04.099 "read": true, 00:20:04.099 "write": true, 00:20:04.099 "unmap": true, 00:20:04.099 "flush": true, 00:20:04.099 "reset": true, 00:20:04.099 "nvme_admin": false, 00:20:04.099 "nvme_io": false, 00:20:04.099 "nvme_io_md": false, 00:20:04.099 "write_zeroes": true, 00:20:04.099 "zcopy": true, 00:20:04.099 "get_zone_info": false, 00:20:04.099 "zone_management": false, 00:20:04.099 "zone_append": false, 00:20:04.099 "compare": false, 00:20:04.099 "compare_and_write": false, 00:20:04.099 "abort": true, 00:20:04.099 "seek_hole": false, 00:20:04.099 "seek_data": false, 00:20:04.099 "copy": true, 00:20:04.099 "nvme_iov_md": false 00:20:04.099 }, 00:20:04.099 "memory_domains": [ 00:20:04.099 { 00:20:04.099 "dma_device_id": "system", 00:20:04.099 "dma_device_type": 1 00:20:04.099 }, 00:20:04.099 { 00:20:04.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.099 "dma_device_type": 2 00:20:04.099 } 00:20:04.099 ], 00:20:04.099 "driver_specific": { 00:20:04.099 "passthru": { 00:20:04.099 "name": "pt1", 00:20:04.099 "base_bdev_name": "malloc1" 00:20:04.099 } 00:20:04.099 } 00:20:04.099 }' 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.099 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:20:04.356 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.618 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.618 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:04.618 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:04.618 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:04.874 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:04.874 "name": "pt2", 00:20:04.874 "aliases": [ 00:20:04.874 "00000000-0000-0000-0000-000000000002" 00:20:04.874 ], 00:20:04.874 "product_name": "passthru", 00:20:04.874 "block_size": 512, 00:20:04.874 "num_blocks": 65536, 00:20:04.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.874 "assigned_rate_limits": { 00:20:04.874 "rw_ios_per_sec": 0, 00:20:04.874 "rw_mbytes_per_sec": 0, 00:20:04.874 "r_mbytes_per_sec": 0, 00:20:04.874 "w_mbytes_per_sec": 0 00:20:04.874 }, 00:20:04.874 "claimed": true, 00:20:04.874 "claim_type": "exclusive_write", 00:20:04.874 "zoned": false, 00:20:04.875 "supported_io_types": { 00:20:04.875 "read": true, 00:20:04.875 "write": true, 00:20:04.875 "unmap": true, 00:20:04.875 "flush": true, 00:20:04.875 "reset": true, 00:20:04.875 "nvme_admin": false, 00:20:04.875 "nvme_io": false, 00:20:04.875 "nvme_io_md": false, 00:20:04.875 "write_zeroes": true, 00:20:04.875 "zcopy": true, 00:20:04.875 "get_zone_info": false, 00:20:04.875 "zone_management": false, 00:20:04.875 "zone_append": false, 00:20:04.875 "compare": false, 00:20:04.875 "compare_and_write": false, 00:20:04.875 "abort": true, 00:20:04.875 "seek_hole": false, 00:20:04.875 "seek_data": false, 00:20:04.875 "copy": true, 00:20:04.875 "nvme_iov_md": false 00:20:04.875 }, 00:20:04.875 "memory_domains": [ 00:20:04.875 { 00:20:04.875 "dma_device_id": "system", 00:20:04.875 "dma_device_type": 1 00:20:04.875 }, 00:20:04.875 { 00:20:04.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.875 "dma_device_type": 2 00:20:04.875 } 00:20:04.875 ], 00:20:04.875 "driver_specific": { 00:20:04.875 "passthru": { 00:20:04.875 "name": "pt2", 00:20:04.875 "base_bdev_name": "malloc2" 00:20:04.875 } 00:20:04.875 } 00:20:04.875 }' 00:20:04.875 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.875 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:04.875 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:04.875 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.132 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.390 08:46:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.390 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.390 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:05.390 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.648 "name": "pt3", 00:20:05.648 "aliases": [ 00:20:05.648 "00000000-0000-0000-0000-000000000003" 00:20:05.648 ], 00:20:05.648 "product_name": "passthru", 00:20:05.648 "block_size": 512, 00:20:05.648 "num_blocks": 65536, 00:20:05.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.648 "assigned_rate_limits": { 00:20:05.648 "rw_ios_per_sec": 0, 00:20:05.648 "rw_mbytes_per_sec": 0, 00:20:05.648 "r_mbytes_per_sec": 0, 00:20:05.648 "w_mbytes_per_sec": 0 00:20:05.648 }, 00:20:05.648 "claimed": true, 00:20:05.648 "claim_type": "exclusive_write", 00:20:05.648 "zoned": false, 00:20:05.648 "supported_io_types": { 00:20:05.648 "read": true, 00:20:05.648 "write": true, 00:20:05.648 "unmap": true, 00:20:05.648 "flush": true, 00:20:05.648 "reset": true, 00:20:05.648 "nvme_admin": false, 00:20:05.648 "nvme_io": false, 00:20:05.648 "nvme_io_md": false, 00:20:05.648 "write_zeroes": true, 00:20:05.648 "zcopy": true, 00:20:05.648 "get_zone_info": false, 00:20:05.648 "zone_management": false, 00:20:05.648 "zone_append": false, 00:20:05.648 "compare": false, 00:20:05.648 "compare_and_write": false, 00:20:05.648 "abort": true, 00:20:05.648 "seek_hole": false, 00:20:05.648 "seek_data": false, 00:20:05.648 "copy": true, 00:20:05.648 "nvme_iov_md": false 00:20:05.648 }, 00:20:05.648 "memory_domains": [ 00:20:05.648 { 00:20:05.648 "dma_device_id": "system", 00:20:05.648 "dma_device_type": 1 00:20:05.648 }, 00:20:05.648 { 00:20:05.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.648 "dma_device_type": 2 00:20:05.648 } 00:20:05.648 ], 00:20:05.648 "driver_specific": { 00:20:05.648 "passthru": { 00:20:05.648 "name": "pt3", 00:20:05.648 "base_bdev_name": "malloc3" 00:20:05.648 } 00:20:05.648 } 00:20:05.648 }' 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.648 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.906 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.906 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.906 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.906 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.906 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.906 08:46:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.906 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:06.163 [2024-07-12 08:46:41.296493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1 '!=' 9ddb600e-d6f2-49e4-a3f8-5acd6bf6add1 ']' 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 128155 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 128155 ']' 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 128155 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128155 00:20:06.163 killing process with pid 128155 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128155' 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 128155 00:20:06.163 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 128155 00:20:06.163 [2024-07-12 08:46:41.331641] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.163 [2024-07-12 08:46:41.331820] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.163 [2024-07-12 08:46:41.332067] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.163 [2024-07-12 08:46:41.332227] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:20:06.420 [2024-07-12 08:46:41.588358] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.794 ************************************ 00:20:07.794 END TEST raid_superblock_test 00:20:07.794 ************************************ 00:20:07.794 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:07.794 00:20:07.794 real 0m17.324s 00:20:07.794 user 0m31.623s 00:20:07.794 sys 0m1.809s 00:20:07.794 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.794 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.794 08:46:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:07.794 08:46:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:07.794 08:46:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:07.794 08:46:42 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.794 08:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.794 ************************************ 00:20:07.794 START TEST raid_read_error_test 00:20:07.794 ************************************ 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Gj5xQ63Zyx 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128679 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128679 /var/tmp/spdk-raid.sock 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 
60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 128679 ']' 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:07.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.794 08:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.794 [2024-07-12 08:46:42.833892] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:20:07.794 [2024-07-12 08:46:42.834320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128679 ] 00:20:08.051 [2024-07-12 08:46:42.999082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.051 [2024-07-12 08:46:43.218963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.307 [2024-07-12 08:46:43.417996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.873 08:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.873 08:46:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:08.873 08:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:08.873 08:46:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:09.131 BaseBdev1_malloc 00:20:09.131 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:09.388 true 00:20:09.388 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:09.645 [2024-07-12 08:46:44.714280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:09.645 [2024-07-12 08:46:44.715144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.645 [2024-07-12 08:46:44.715465] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:09.645 [2024-07-12 08:46:44.715741] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.645 [2024-07-12 08:46:44.718707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.645 [2024-07-12 08:46:44.719001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:09.645 BaseBdev1 00:20:09.645 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:09.645 08:46:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:09.903 BaseBdev2_malloc 00:20:09.903 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:10.161 true 00:20:10.161 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:10.726 [2024-07-12 08:46:45.640094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:10.726 [2024-07-12 08:46:45.640415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.726 [2024-07-12 08:46:45.640581] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:10.726 [2024-07-12 08:46:45.640738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.726 [2024-07-12 08:46:45.643401] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.726 [2024-07-12 08:46:45.643576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:10.726 BaseBdev2 00:20:10.726 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:10.726 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:10.984 BaseBdev3_malloc 00:20:10.984 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:11.242 true 00:20:11.242 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:11.501 [2024-07-12 08:46:46.611992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:11.501 [2024-07-12 08:46:46.612345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.501 [2024-07-12 08:46:46.612506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:11.501 [2024-07-12 08:46:46.612633] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.501 [2024-07-12 08:46:46.615277] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.501 [2024-07-12 08:46:46.615458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:11.501 BaseBdev3 00:20:11.501 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:11.759 [2024-07-12 08:46:46.900164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.759 [2024-07-12 08:46:46.902672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.759 [2024-07-12 08:46:46.902894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.759 
[2024-07-12 08:46:46.903313] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:11.759 [2024-07-12 08:46:46.903461] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:11.759 [2024-07-12 08:46:46.903651] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:11.759 [2024-07-12 08:46:46.904129] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:11.759 [2024-07-12 08:46:46.904276] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:11.759 [2024-07-12 08:46:46.904630] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.759 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.017 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.017 "name": "raid_bdev1", 00:20:12.017 "uuid": "bcda0772-39a6-4d1b-a71e-36355415a6bf", 00:20:12.017 "strip_size_kb": 64, 00:20:12.017 "state": "online", 00:20:12.017 "raid_level": "raid0", 00:20:12.017 "superblock": true, 00:20:12.017 "num_base_bdevs": 3, 00:20:12.017 "num_base_bdevs_discovered": 3, 00:20:12.017 "num_base_bdevs_operational": 3, 00:20:12.017 "base_bdevs_list": [ 00:20:12.017 { 00:20:12.017 "name": "BaseBdev1", 00:20:12.017 "uuid": "15b0ba4f-d904-555e-80cd-99ab47db1343", 00:20:12.017 "is_configured": true, 00:20:12.017 "data_offset": 2048, 00:20:12.017 "data_size": 63488 00:20:12.017 }, 00:20:12.017 { 00:20:12.017 "name": "BaseBdev2", 00:20:12.017 "uuid": "1f0c4163-d350-5bf1-9ad9-ed96aca372aa", 00:20:12.017 "is_configured": true, 00:20:12.017 "data_offset": 2048, 00:20:12.018 "data_size": 63488 00:20:12.018 }, 00:20:12.018 { 00:20:12.018 "name": "BaseBdev3", 00:20:12.018 "uuid": "73ea864c-a19a-5c00-aba5-13f863f64a8b", 00:20:12.018 "is_configured": true, 00:20:12.018 "data_offset": 2048, 00:20:12.018 "data_size": 63488 00:20:12.018 } 00:20:12.018 ] 00:20:12.018 }' 00:20:12.018 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.018 08:46:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.951 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:12.951 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:12.951 [2024-07-12 08:46:47.970180] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:13.897 08:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.155 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.414 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.414 "name": "raid_bdev1", 00:20:14.414 "uuid": "bcda0772-39a6-4d1b-a71e-36355415a6bf", 00:20:14.414 "strip_size_kb": 64, 00:20:14.414 "state": "online", 00:20:14.414 "raid_level": "raid0", 00:20:14.414 "superblock": true, 00:20:14.414 "num_base_bdevs": 3, 00:20:14.414 "num_base_bdevs_discovered": 3, 00:20:14.414 "num_base_bdevs_operational": 3, 00:20:14.414 "base_bdevs_list": [ 00:20:14.414 { 00:20:14.414 "name": "BaseBdev1", 00:20:14.414 "uuid": "15b0ba4f-d904-555e-80cd-99ab47db1343", 00:20:14.414 "is_configured": true, 00:20:14.414 "data_offset": 2048, 00:20:14.414 "data_size": 63488 00:20:14.414 }, 00:20:14.414 { 00:20:14.414 "name": "BaseBdev2", 00:20:14.414 "uuid": "1f0c4163-d350-5bf1-9ad9-ed96aca372aa", 00:20:14.414 "is_configured": true, 00:20:14.414 "data_offset": 2048, 00:20:14.414 "data_size": 63488 00:20:14.414 }, 00:20:14.414 { 00:20:14.414 "name": "BaseBdev3", 00:20:14.414 "uuid": "73ea864c-a19a-5c00-aba5-13f863f64a8b", 00:20:14.414 "is_configured": true, 00:20:14.414 "data_offset": 2048, 00:20:14.414 "data_size": 63488 
00:20:14.414 } 00:20:14.414 ] 00:20:14.414 }' 00:20:14.414 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.414 08:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.979 08:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:15.237 [2024-07-12 08:46:50.367265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.237 [2024-07-12 08:46:50.367462] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.237 [2024-07-12 08:46:50.370658] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.237 [2024-07-12 08:46:50.370822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.237 [2024-07-12 08:46:50.370962] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.237 [2024-07-12 08:46:50.371127] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:15.237 0 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128679 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 128679 ']' 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 128679 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128679 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128679' 00:20:15.237 killing process with pid 128679 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 128679 00:20:15.237 08:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 128679 00:20:15.237 [2024-07-12 08:46:50.405420] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:15.496 [2024-07-12 08:46:50.600982] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Gj5xQ63Zyx 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:16.868 ************************************ 00:20:16.868 END TEST raid_read_error_test 00:20:16.868 ************************************ 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 
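The pass/fail verdict for raid_read_error_test comes straight out of the bdevperf log. A rough reconstruction of that final check, assembled from the grep/awk pipeline above and the [[ ... != 0.00 ]] comparison that follows (the log path is the one produced by mktemp earlier in this test):

    # failures-per-second column for raid_bdev1, skipping the per-job summary lines
    fail_per_s=$(grep -v Job /raidtest/tmp.Gj5xQ63Zyx | grep raid_bdev1 | awk '{print $6}')
    # raid0 carries no redundancy, so the injected read error must surface as a non-zero rate
    [[ $fail_per_s != "0.00" ]]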
00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:20:16.868 00:20:16.868 real 0m9.047s 00:20:16.868 user 0m14.131s 00:20:16.868 sys 0m1.071s 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.868 08:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.868 08:46:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:16.868 08:46:51 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:16.868 08:46:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:16.868 08:46:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.868 08:46:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.868 ************************************ 00:20:16.868 START TEST raid_write_error_test 00:20:16.868 ************************************ 00:20:16.868 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:20:16.868 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:20:16.868 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QYV1nXEjTn 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128908 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128908 /var/tmp/spdk-raid.sock 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 128908 ']' 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:16.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.869 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.869 [2024-07-12 08:46:51.924333] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
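Both error tests drive I/O the same way: bdevperf is launched with -z so it parks on the shared RPC socket, the raid0 volume and error bdevs are assembled over that socket, a failure is injected into one base bdev, and the queued workload is released with bdevperf.py. A condensed sketch of that sequence using only invocations that appear in this log (read-test variant; the backgrounding and redirection into the bdevperf log file are handled by the surrounding harness and are not shown in the trace):

    # start bdevperf in wait mode (-z) against the shared RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # ... malloc/error/passthru base bdevs and the raid0 volume are created via rpc.py ...
    # inject a read failure into the first base bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # release the queued randrw workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests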
00:20:16.869 [2024-07-12 08:46:51.924815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128908 ] 00:20:17.127 [2024-07-12 08:46:52.107120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.385 [2024-07-12 08:46:52.325137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.385 [2024-07-12 08:46:52.523958] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.962 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.962 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:17.962 08:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:17.962 08:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:18.225 BaseBdev1_malloc 00:20:18.225 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:18.484 true 00:20:18.484 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:18.742 [2024-07-12 08:46:53.739576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:18.742 [2024-07-12 08:46:53.739909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.742 [2024-07-12 08:46:53.740116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:18.742 [2024-07-12 08:46:53.740248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.742 [2024-07-12 08:46:53.742971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.742 [2024-07-12 08:46:53.743134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:18.742 BaseBdev1 00:20:18.742 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:18.742 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:18.999 BaseBdev2_malloc 00:20:18.999 08:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:19.257 true 00:20:19.257 08:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:19.515 [2024-07-12 08:46:54.616760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:19.515 [2024-07-12 08:46:54.617086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.515 [2024-07-12 08:46:54.617244] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:19.515 [2024-07-12 
08:46:54.617361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.515 [2024-07-12 08:46:54.619955] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.515 [2024-07-12 08:46:54.620120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:19.515 BaseBdev2 00:20:19.515 08:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:19.515 08:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:19.773 BaseBdev3_malloc 00:20:19.773 08:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:20.338 true 00:20:20.338 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:20.338 [2024-07-12 08:46:55.507788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:20.338 [2024-07-12 08:46:55.508066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.338 [2024-07-12 08:46:55.508224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:20.338 [2024-07-12 08:46:55.508400] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.338 [2024-07-12 08:46:55.511125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.338 [2024-07-12 08:46:55.511292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:20.338 BaseBdev3 00:20:20.338 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:20.596 [2024-07-12 08:46:55.783922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:20.596 [2024-07-12 08:46:55.786283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.596 [2024-07-12 08:46:55.786505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.596 [2024-07-12 08:46:55.786881] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:20.596 [2024-07-12 08:46:55.787008] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:20.596 [2024-07-12 08:46:55.787202] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:20.596 [2024-07-12 08:46:55.787674] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:20.596 [2024-07-12 08:46:55.787799] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:20.596 [2024-07-12 08:46:55.788118] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:20.854 08:46:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.854 08:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.854 08:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.854 "name": "raid_bdev1", 00:20:20.854 "uuid": "178ba928-9582-449c-affe-21d654117bd0", 00:20:20.854 "strip_size_kb": 64, 00:20:20.854 "state": "online", 00:20:20.854 "raid_level": "raid0", 00:20:20.854 "superblock": true, 00:20:20.854 "num_base_bdevs": 3, 00:20:20.854 "num_base_bdevs_discovered": 3, 00:20:20.854 "num_base_bdevs_operational": 3, 00:20:20.854 "base_bdevs_list": [ 00:20:20.854 { 00:20:20.855 "name": "BaseBdev1", 00:20:20.855 "uuid": "d73fe746-e238-571d-add1-411e892f75ac", 00:20:20.855 "is_configured": true, 00:20:20.855 "data_offset": 2048, 00:20:20.855 "data_size": 63488 00:20:20.855 }, 00:20:20.855 { 00:20:20.855 "name": "BaseBdev2", 00:20:20.855 "uuid": "f1d2ff32-5fd0-5fbe-ac7d-1e6b0b1581f1", 00:20:20.855 "is_configured": true, 00:20:20.855 "data_offset": 2048, 00:20:20.855 "data_size": 63488 00:20:20.855 }, 00:20:20.855 { 00:20:20.855 "name": "BaseBdev3", 00:20:20.855 "uuid": "8b221dcd-d094-545e-9cd5-18e197190e81", 00:20:20.855 "is_configured": true, 00:20:20.855 "data_offset": 2048, 00:20:20.855 "data_size": 63488 00:20:20.855 } 00:20:20.855 ] 00:20:20.855 }' 00:20:20.855 08:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.855 08:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.787 08:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:21.787 08:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:21.787 [2024-07-12 08:46:56.897719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:22.719 08:46:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:22.977 08:46:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.977 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.235 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.235 "name": "raid_bdev1", 00:20:23.235 "uuid": "178ba928-9582-449c-affe-21d654117bd0", 00:20:23.235 "strip_size_kb": 64, 00:20:23.235 "state": "online", 00:20:23.235 "raid_level": "raid0", 00:20:23.235 "superblock": true, 00:20:23.235 "num_base_bdevs": 3, 00:20:23.235 "num_base_bdevs_discovered": 3, 00:20:23.235 "num_base_bdevs_operational": 3, 00:20:23.235 "base_bdevs_list": [ 00:20:23.235 { 00:20:23.235 "name": "BaseBdev1", 00:20:23.235 "uuid": "d73fe746-e238-571d-add1-411e892f75ac", 00:20:23.235 "is_configured": true, 00:20:23.235 "data_offset": 2048, 00:20:23.235 "data_size": 63488 00:20:23.235 }, 00:20:23.235 { 00:20:23.235 "name": "BaseBdev2", 00:20:23.235 "uuid": "f1d2ff32-5fd0-5fbe-ac7d-1e6b0b1581f1", 00:20:23.235 "is_configured": true, 00:20:23.235 "data_offset": 2048, 00:20:23.235 "data_size": 63488 00:20:23.235 }, 00:20:23.235 { 00:20:23.235 "name": "BaseBdev3", 00:20:23.235 "uuid": "8b221dcd-d094-545e-9cd5-18e197190e81", 00:20:23.235 "is_configured": true, 00:20:23.235 "data_offset": 2048, 00:20:23.235 "data_size": 63488 00:20:23.235 } 00:20:23.235 ] 00:20:23.235 }' 00:20:23.235 08:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.235 08:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.182 08:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:24.439 [2024-07-12 08:46:59.384076] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.439 [2024-07-12 08:46:59.384318] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.439 [2024-07-12 08:46:59.388482] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.439 [2024-07-12 08:46:59.388771] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.439 [2024-07-12 08:46:59.388976] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.439 0 00:20:24.439 [2024-07-12 08:46:59.389114] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128908 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 128908 ']' 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 128908 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128908 00:20:24.439 killing process with pid 128908 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128908' 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 128908 00:20:24.439 08:46:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 128908 00:20:24.439 [2024-07-12 08:46:59.425145] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.696 [2024-07-12 08:46:59.641678] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QYV1nXEjTn 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:26.066 ************************************ 00:20:26.066 END TEST raid_write_error_test 00:20:26.066 ************************************ 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.40 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.40 != \0\.\0\0 ]] 00:20:26.066 00:20:26.066 real 0m9.015s 00:20:26.066 user 0m14.068s 00:20:26.066 sys 0m0.981s 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.066 08:47:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.066 08:47:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:26.066 08:47:00 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:26.066 08:47:00 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:26.066 08:47:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:26.066 08:47:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.066 08:47:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.066 
************************************ 00:20:26.066 START TEST raid_state_function_test 00:20:26.066 ************************************ 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=129139 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 129139' 00:20:26.066 Process raid pid: 129139 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 129139 /var/tmp/spdk-raid.sock 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 129139 ']' 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.066 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.066 [2024-07-12 08:47:00.991943] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:20:26.066 [2024-07-12 08:47:00.992402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.066 [2024-07-12 08:47:01.164098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.322 [2024-07-12 08:47:01.421297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.578 [2024-07-12 08:47:01.638994] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.836 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.836 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:26.836 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:27.095 [2024-07-12 08:47:02.203133] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.095 [2024-07-12 08:47:02.203486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.095 [2024-07-12 08:47:02.203601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.095 [2024-07-12 08:47:02.203667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.095 [2024-07-12 08:47:02.203764] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:27.095 [2024-07-12 08:47:02.203818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:27.095 08:47:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.095 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.353 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.353 "name": "Existed_Raid", 00:20:27.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.353 "strip_size_kb": 64, 00:20:27.353 "state": "configuring", 00:20:27.353 "raid_level": "concat", 00:20:27.353 "superblock": false, 00:20:27.353 "num_base_bdevs": 3, 00:20:27.353 "num_base_bdevs_discovered": 0, 00:20:27.353 "num_base_bdevs_operational": 3, 00:20:27.353 "base_bdevs_list": [ 00:20:27.353 { 00:20:27.353 "name": "BaseBdev1", 00:20:27.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.353 "is_configured": false, 00:20:27.353 "data_offset": 0, 00:20:27.353 "data_size": 0 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "name": "BaseBdev2", 00:20:27.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.353 "is_configured": false, 00:20:27.353 "data_offset": 0, 00:20:27.353 "data_size": 0 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "name": "BaseBdev3", 00:20:27.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.353 "is_configured": false, 00:20:27.353 "data_offset": 0, 00:20:27.353 "data_size": 0 00:20:27.353 } 00:20:27.353 ] 00:20:27.353 }' 00:20:27.353 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.353 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.287 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:28.287 [2024-07-12 08:47:03.455268] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.287 [2024-07-12 08:47:03.455499] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:28.287 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:28.853 [2024-07-12 08:47:03.739332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.853 [2024-07-12 08:47:03.739619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.853 [2024-07-12 08:47:03.739728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.853 [2024-07-12 08:47:03.739786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:20:28.853 [2024-07-12 08:47:03.739950] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.853 [2024-07-12 08:47:03.740017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.853 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.853 [2024-07-12 08:47:04.003534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.853 BaseBdev1 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.853 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.436 [ 00:20:29.436 { 00:20:29.436 "name": "BaseBdev1", 00:20:29.436 "aliases": [ 00:20:29.436 "383e1402-c0cc-4b41-b8cc-f22d97780779" 00:20:29.436 ], 00:20:29.436 "product_name": "Malloc disk", 00:20:29.436 "block_size": 512, 00:20:29.436 "num_blocks": 65536, 00:20:29.436 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:29.436 "assigned_rate_limits": { 00:20:29.436 "rw_ios_per_sec": 0, 00:20:29.436 "rw_mbytes_per_sec": 0, 00:20:29.436 "r_mbytes_per_sec": 0, 00:20:29.436 "w_mbytes_per_sec": 0 00:20:29.436 }, 00:20:29.436 "claimed": true, 00:20:29.436 "claim_type": "exclusive_write", 00:20:29.436 "zoned": false, 00:20:29.436 "supported_io_types": { 00:20:29.436 "read": true, 00:20:29.436 "write": true, 00:20:29.436 "unmap": true, 00:20:29.436 "flush": true, 00:20:29.436 "reset": true, 00:20:29.436 "nvme_admin": false, 00:20:29.436 "nvme_io": false, 00:20:29.436 "nvme_io_md": false, 00:20:29.436 "write_zeroes": true, 00:20:29.436 "zcopy": true, 00:20:29.436 "get_zone_info": false, 00:20:29.436 "zone_management": false, 00:20:29.436 "zone_append": false, 00:20:29.436 "compare": false, 00:20:29.436 "compare_and_write": false, 00:20:29.436 "abort": true, 00:20:29.436 "seek_hole": false, 00:20:29.436 "seek_data": false, 00:20:29.436 "copy": true, 00:20:29.436 "nvme_iov_md": false 00:20:29.436 }, 00:20:29.436 "memory_domains": [ 00:20:29.436 { 00:20:29.436 "dma_device_id": "system", 00:20:29.436 "dma_device_type": 1 00:20:29.436 }, 00:20:29.436 { 00:20:29.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.436 "dma_device_type": 2 00:20:29.436 } 00:20:29.436 ], 00:20:29.436 "driver_specific": {} 00:20:29.436 } 00:20:29.436 ] 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.436 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.693 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.693 "name": "Existed_Raid", 00:20:29.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.693 "strip_size_kb": 64, 00:20:29.693 "state": "configuring", 00:20:29.693 "raid_level": "concat", 00:20:29.693 "superblock": false, 00:20:29.693 "num_base_bdevs": 3, 00:20:29.694 "num_base_bdevs_discovered": 1, 00:20:29.694 "num_base_bdevs_operational": 3, 00:20:29.694 "base_bdevs_list": [ 00:20:29.694 { 00:20:29.694 "name": "BaseBdev1", 00:20:29.694 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:29.694 "is_configured": true, 00:20:29.694 "data_offset": 0, 00:20:29.694 "data_size": 65536 00:20:29.694 }, 00:20:29.694 { 00:20:29.694 "name": "BaseBdev2", 00:20:29.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.694 "is_configured": false, 00:20:29.694 "data_offset": 0, 00:20:29.694 "data_size": 0 00:20:29.694 }, 00:20:29.694 { 00:20:29.694 "name": "BaseBdev3", 00:20:29.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.694 "is_configured": false, 00:20:29.694 "data_offset": 0, 00:20:29.694 "data_size": 0 00:20:29.694 } 00:20:29.694 ] 00:20:29.694 }' 00:20:29.694 08:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.694 08:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.627 08:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:30.884 [2024-07-12 08:47:05.940099] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:30.884 [2024-07-12 08:47:05.940412] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:20:30.884 08:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:31.142 [2024-07-12 08:47:06.264223] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.142 [2024-07-12 08:47:06.266801] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.142 [2024-07-12 08:47:06.267091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.142 [2024-07-12 08:47:06.267199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.142 [2024-07-12 08:47:06.267280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.142 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.143 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.143 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.401 08:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.401 "name": "Existed_Raid", 00:20:31.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.401 "strip_size_kb": 64, 00:20:31.401 "state": "configuring", 00:20:31.401 "raid_level": "concat", 00:20:31.401 "superblock": false, 00:20:31.401 "num_base_bdevs": 3, 00:20:31.401 "num_base_bdevs_discovered": 1, 00:20:31.401 "num_base_bdevs_operational": 3, 00:20:31.401 "base_bdevs_list": [ 00:20:31.401 { 00:20:31.401 "name": "BaseBdev1", 00:20:31.401 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:31.401 "is_configured": true, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 65536 00:20:31.401 }, 00:20:31.401 { 00:20:31.401 "name": "BaseBdev2", 00:20:31.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.401 "is_configured": false, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 0 00:20:31.401 }, 00:20:31.401 { 00:20:31.401 "name": "BaseBdev3", 00:20:31.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.401 "is_configured": false, 00:20:31.401 "data_offset": 0, 00:20:31.401 "data_size": 0 00:20:31.401 } 00:20:31.401 ] 00:20:31.401 }' 00:20:31.401 08:47:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.401 08:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.334 08:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:32.591 [2024-07-12 08:47:07.554733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.591 BaseBdev2 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:32.591 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.849 08:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:33.108 [ 00:20:33.108 { 00:20:33.108 "name": "BaseBdev2", 00:20:33.108 "aliases": [ 00:20:33.108 "70233b20-af5d-475b-b2e1-9c80587dd6a4" 00:20:33.108 ], 00:20:33.108 "product_name": "Malloc disk", 00:20:33.108 "block_size": 512, 00:20:33.108 "num_blocks": 65536, 00:20:33.108 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:33.108 "assigned_rate_limits": { 00:20:33.108 "rw_ios_per_sec": 0, 00:20:33.108 "rw_mbytes_per_sec": 0, 00:20:33.108 "r_mbytes_per_sec": 0, 00:20:33.108 "w_mbytes_per_sec": 0 00:20:33.108 }, 00:20:33.108 "claimed": true, 00:20:33.108 "claim_type": "exclusive_write", 00:20:33.108 "zoned": false, 00:20:33.108 "supported_io_types": { 00:20:33.108 "read": true, 00:20:33.108 "write": true, 00:20:33.108 "unmap": true, 00:20:33.108 "flush": true, 00:20:33.108 "reset": true, 00:20:33.108 "nvme_admin": false, 00:20:33.108 "nvme_io": false, 00:20:33.108 "nvme_io_md": false, 00:20:33.108 "write_zeroes": true, 00:20:33.108 "zcopy": true, 00:20:33.108 "get_zone_info": false, 00:20:33.108 "zone_management": false, 00:20:33.108 "zone_append": false, 00:20:33.108 "compare": false, 00:20:33.108 "compare_and_write": false, 00:20:33.108 "abort": true, 00:20:33.108 "seek_hole": false, 00:20:33.108 "seek_data": false, 00:20:33.108 "copy": true, 00:20:33.108 "nvme_iov_md": false 00:20:33.108 }, 00:20:33.108 "memory_domains": [ 00:20:33.108 { 00:20:33.108 "dma_device_id": "system", 00:20:33.108 "dma_device_type": 1 00:20:33.108 }, 00:20:33.108 { 00:20:33.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.108 "dma_device_type": 2 00:20:33.108 } 00:20:33.108 ], 00:20:33.108 "driver_specific": {} 00:20:33.108 } 00:20:33.108 ] 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:33.108 08:47:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.108 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.366 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.366 "name": "Existed_Raid", 00:20:33.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.366 "strip_size_kb": 64, 00:20:33.366 "state": "configuring", 00:20:33.366 "raid_level": "concat", 00:20:33.366 "superblock": false, 00:20:33.366 "num_base_bdevs": 3, 00:20:33.366 "num_base_bdevs_discovered": 2, 00:20:33.366 "num_base_bdevs_operational": 3, 00:20:33.366 "base_bdevs_list": [ 00:20:33.366 { 00:20:33.366 "name": "BaseBdev1", 00:20:33.366 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:33.366 "is_configured": true, 00:20:33.366 "data_offset": 0, 00:20:33.366 "data_size": 65536 00:20:33.366 }, 00:20:33.366 { 00:20:33.366 "name": "BaseBdev2", 00:20:33.366 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:33.366 "is_configured": true, 00:20:33.366 "data_offset": 0, 00:20:33.366 "data_size": 65536 00:20:33.366 }, 00:20:33.366 { 00:20:33.366 "name": "BaseBdev3", 00:20:33.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.366 "is_configured": false, 00:20:33.366 "data_offset": 0, 00:20:33.366 "data_size": 0 00:20:33.366 } 00:20:33.366 ] 00:20:33.366 }' 00:20:33.366 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.366 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:34.237 [2024-07-12 08:47:09.360690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.237 [2024-07-12 08:47:09.360940] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:34.237 [2024-07-12 08:47:09.361062] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:34.237 [2024-07-12 08:47:09.361241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005860 00:20:34.237 [2024-07-12 08:47:09.361668] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:34.237 [2024-07-12 08:47:09.361791] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:34.237 [2024-07-12 08:47:09.362167] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.237 BaseBdev3 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:34.237 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.531 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:34.789 [ 00:20:34.789 { 00:20:34.789 "name": "BaseBdev3", 00:20:34.789 "aliases": [ 00:20:34.789 "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7" 00:20:34.789 ], 00:20:34.789 "product_name": "Malloc disk", 00:20:34.789 "block_size": 512, 00:20:34.789 "num_blocks": 65536, 00:20:34.789 "uuid": "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7", 00:20:34.789 "assigned_rate_limits": { 00:20:34.789 "rw_ios_per_sec": 0, 00:20:34.789 "rw_mbytes_per_sec": 0, 00:20:34.789 "r_mbytes_per_sec": 0, 00:20:34.789 "w_mbytes_per_sec": 0 00:20:34.789 }, 00:20:34.789 "claimed": true, 00:20:34.789 "claim_type": "exclusive_write", 00:20:34.789 "zoned": false, 00:20:34.789 "supported_io_types": { 00:20:34.789 "read": true, 00:20:34.789 "write": true, 00:20:34.789 "unmap": true, 00:20:34.789 "flush": true, 00:20:34.789 "reset": true, 00:20:34.789 "nvme_admin": false, 00:20:34.789 "nvme_io": false, 00:20:34.789 "nvme_io_md": false, 00:20:34.789 "write_zeroes": true, 00:20:34.789 "zcopy": true, 00:20:34.789 "get_zone_info": false, 00:20:34.789 "zone_management": false, 00:20:34.789 "zone_append": false, 00:20:34.789 "compare": false, 00:20:34.789 "compare_and_write": false, 00:20:34.789 "abort": true, 00:20:34.789 "seek_hole": false, 00:20:34.789 "seek_data": false, 00:20:34.789 "copy": true, 00:20:34.789 "nvme_iov_md": false 00:20:34.789 }, 00:20:34.789 "memory_domains": [ 00:20:34.789 { 00:20:34.789 "dma_device_id": "system", 00:20:34.789 "dma_device_type": 1 00:20:34.789 }, 00:20:34.789 { 00:20:34.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.789 "dma_device_type": 2 00:20:34.789 } 00:20:34.789 ], 00:20:34.789 "driver_specific": {} 00:20:34.789 } 00:20:34.789 ] 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.789 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.048 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.048 "name": "Existed_Raid", 00:20:35.048 "uuid": "7a9abbad-2e2b-4956-8bee-a04e89fe82e5", 00:20:35.048 "strip_size_kb": 64, 00:20:35.048 "state": "online", 00:20:35.048 "raid_level": "concat", 00:20:35.048 "superblock": false, 00:20:35.048 "num_base_bdevs": 3, 00:20:35.048 "num_base_bdevs_discovered": 3, 00:20:35.048 "num_base_bdevs_operational": 3, 00:20:35.048 "base_bdevs_list": [ 00:20:35.048 { 00:20:35.048 "name": "BaseBdev1", 00:20:35.048 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:35.048 "is_configured": true, 00:20:35.048 "data_offset": 0, 00:20:35.048 "data_size": 65536 00:20:35.048 }, 00:20:35.048 { 00:20:35.048 "name": "BaseBdev2", 00:20:35.048 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:35.048 "is_configured": true, 00:20:35.048 "data_offset": 0, 00:20:35.048 "data_size": 65536 00:20:35.048 }, 00:20:35.048 { 00:20:35.048 "name": "BaseBdev3", 00:20:35.048 "uuid": "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7", 00:20:35.048 "is_configured": true, 00:20:35.048 "data_offset": 0, 00:20:35.048 "data_size": 65536 00:20:35.048 } 00:20:35.048 ] 00:20:35.048 }' 00:20:35.048 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.048 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:35.984 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:35.984 [2024-07-12 08:47:11.105718] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.984 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:35.984 "name": "Existed_Raid", 00:20:35.984 "aliases": [ 00:20:35.984 "7a9abbad-2e2b-4956-8bee-a04e89fe82e5" 00:20:35.984 ], 00:20:35.984 "product_name": "Raid Volume", 00:20:35.984 "block_size": 512, 00:20:35.984 "num_blocks": 196608, 00:20:35.984 "uuid": "7a9abbad-2e2b-4956-8bee-a04e89fe82e5", 00:20:35.984 "assigned_rate_limits": { 00:20:35.984 "rw_ios_per_sec": 0, 00:20:35.984 "rw_mbytes_per_sec": 0, 00:20:35.984 "r_mbytes_per_sec": 0, 00:20:35.984 "w_mbytes_per_sec": 0 00:20:35.984 }, 00:20:35.984 "claimed": false, 00:20:35.984 "zoned": false, 00:20:35.984 "supported_io_types": { 00:20:35.984 "read": true, 00:20:35.984 "write": true, 00:20:35.984 "unmap": true, 00:20:35.984 "flush": true, 00:20:35.984 "reset": true, 00:20:35.984 "nvme_admin": false, 00:20:35.984 "nvme_io": false, 00:20:35.984 "nvme_io_md": false, 00:20:35.984 "write_zeroes": true, 00:20:35.984 "zcopy": false, 00:20:35.984 "get_zone_info": false, 00:20:35.984 "zone_management": false, 00:20:35.984 "zone_append": false, 00:20:35.984 "compare": false, 00:20:35.984 "compare_and_write": false, 00:20:35.984 "abort": false, 00:20:35.984 "seek_hole": false, 00:20:35.984 "seek_data": false, 00:20:35.984 "copy": false, 00:20:35.984 "nvme_iov_md": false 00:20:35.984 }, 00:20:35.984 "memory_domains": [ 00:20:35.984 { 00:20:35.984 "dma_device_id": "system", 00:20:35.984 "dma_device_type": 1 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.984 "dma_device_type": 2 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "dma_device_id": "system", 00:20:35.984 "dma_device_type": 1 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.984 "dma_device_type": 2 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "dma_device_id": "system", 00:20:35.984 "dma_device_type": 1 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.984 "dma_device_type": 2 00:20:35.984 } 00:20:35.984 ], 00:20:35.984 "driver_specific": { 00:20:35.984 "raid": { 00:20:35.984 "uuid": "7a9abbad-2e2b-4956-8bee-a04e89fe82e5", 00:20:35.984 "strip_size_kb": 64, 00:20:35.984 "state": "online", 00:20:35.984 "raid_level": "concat", 00:20:35.984 "superblock": false, 00:20:35.984 "num_base_bdevs": 3, 00:20:35.984 "num_base_bdevs_discovered": 3, 00:20:35.984 "num_base_bdevs_operational": 3, 00:20:35.984 "base_bdevs_list": [ 00:20:35.984 { 00:20:35.984 "name": "BaseBdev1", 00:20:35.984 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:35.984 "is_configured": true, 00:20:35.984 "data_offset": 0, 00:20:35.984 "data_size": 65536 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "name": "BaseBdev2", 00:20:35.984 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:35.984 "is_configured": true, 00:20:35.984 "data_offset": 0, 00:20:35.984 "data_size": 65536 00:20:35.984 }, 00:20:35.984 { 00:20:35.984 "name": "BaseBdev3", 00:20:35.984 "uuid": "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7", 00:20:35.984 "is_configured": true, 00:20:35.984 "data_offset": 0, 00:20:35.984 "data_size": 65536 00:20:35.984 } 00:20:35.984 ] 00:20:35.984 } 00:20:35.984 } 00:20:35.984 }' 
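The property checks that follow run one jq probe per field; condensed, the verification amounts to the sketch below. Assumptions: the same RPC socket as in the trace, jq on PATH, and an ./spdk checkout path standing in for the full /home/vagrant/spdk_repo path used above:

  # Dump the assembled raid volume and spot-check the fields the trace verifies
  RPC="./spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')

  [[ $(jq -r .block_size <<<"$info") == 512 ]]                    || exit 1
  [[ $(jq -r .driver_specific.raid.state <<<"$info") == online ]] || exit 1

  # Each configured base bdev should itself be a 512-byte-block Malloc disk
  for name in $(jq -r '.driver_specific.raid.base_bdevs_list[]
                       | select(.is_configured == true).name' <<<"$info"); do
      $RPC bdev_get_bdevs -b "$name" | jq -e '.[0].block_size == 512' >/dev/null || exit 1
  done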
00:20:35.984 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:35.984 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:35.984 BaseBdev2 00:20:35.984 BaseBdev3' 00:20:35.984 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.984 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:36.242 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.500 "name": "BaseBdev1", 00:20:36.500 "aliases": [ 00:20:36.500 "383e1402-c0cc-4b41-b8cc-f22d97780779" 00:20:36.500 ], 00:20:36.500 "product_name": "Malloc disk", 00:20:36.500 "block_size": 512, 00:20:36.500 "num_blocks": 65536, 00:20:36.500 "uuid": "383e1402-c0cc-4b41-b8cc-f22d97780779", 00:20:36.500 "assigned_rate_limits": { 00:20:36.500 "rw_ios_per_sec": 0, 00:20:36.500 "rw_mbytes_per_sec": 0, 00:20:36.500 "r_mbytes_per_sec": 0, 00:20:36.500 "w_mbytes_per_sec": 0 00:20:36.500 }, 00:20:36.500 "claimed": true, 00:20:36.500 "claim_type": "exclusive_write", 00:20:36.500 "zoned": false, 00:20:36.500 "supported_io_types": { 00:20:36.500 "read": true, 00:20:36.500 "write": true, 00:20:36.500 "unmap": true, 00:20:36.500 "flush": true, 00:20:36.500 "reset": true, 00:20:36.500 "nvme_admin": false, 00:20:36.500 "nvme_io": false, 00:20:36.500 "nvme_io_md": false, 00:20:36.500 "write_zeroes": true, 00:20:36.500 "zcopy": true, 00:20:36.500 "get_zone_info": false, 00:20:36.500 "zone_management": false, 00:20:36.500 "zone_append": false, 00:20:36.500 "compare": false, 00:20:36.500 "compare_and_write": false, 00:20:36.500 "abort": true, 00:20:36.500 "seek_hole": false, 00:20:36.500 "seek_data": false, 00:20:36.500 "copy": true, 00:20:36.500 "nvme_iov_md": false 00:20:36.500 }, 00:20:36.500 "memory_domains": [ 00:20:36.500 { 00:20:36.500 "dma_device_id": "system", 00:20:36.500 "dma_device_type": 1 00:20:36.500 }, 00:20:36.500 { 00:20:36.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.500 "dma_device_type": 2 00:20:36.500 } 00:20:36.500 ], 00:20:36.500 "driver_specific": {} 00:20:36.500 }' 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.500 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:36.759 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:37.325 "name": "BaseBdev2", 00:20:37.325 "aliases": [ 00:20:37.325 "70233b20-af5d-475b-b2e1-9c80587dd6a4" 00:20:37.325 ], 00:20:37.325 "product_name": "Malloc disk", 00:20:37.325 "block_size": 512, 00:20:37.325 "num_blocks": 65536, 00:20:37.325 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:37.325 "assigned_rate_limits": { 00:20:37.325 "rw_ios_per_sec": 0, 00:20:37.325 "rw_mbytes_per_sec": 0, 00:20:37.325 "r_mbytes_per_sec": 0, 00:20:37.325 "w_mbytes_per_sec": 0 00:20:37.325 }, 00:20:37.325 "claimed": true, 00:20:37.325 "claim_type": "exclusive_write", 00:20:37.325 "zoned": false, 00:20:37.325 "supported_io_types": { 00:20:37.325 "read": true, 00:20:37.325 "write": true, 00:20:37.325 "unmap": true, 00:20:37.325 "flush": true, 00:20:37.325 "reset": true, 00:20:37.325 "nvme_admin": false, 00:20:37.325 "nvme_io": false, 00:20:37.325 "nvme_io_md": false, 00:20:37.325 "write_zeroes": true, 00:20:37.325 "zcopy": true, 00:20:37.325 "get_zone_info": false, 00:20:37.325 "zone_management": false, 00:20:37.325 "zone_append": false, 00:20:37.325 "compare": false, 00:20:37.325 "compare_and_write": false, 00:20:37.325 "abort": true, 00:20:37.325 "seek_hole": false, 00:20:37.325 "seek_data": false, 00:20:37.325 "copy": true, 00:20:37.325 "nvme_iov_md": false 00:20:37.325 }, 00:20:37.325 "memory_domains": [ 00:20:37.325 { 00:20:37.325 "dma_device_id": "system", 00:20:37.325 "dma_device_type": 1 00:20:37.325 }, 00:20:37.325 { 00:20:37.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.325 "dma_device_type": 2 00:20:37.325 } 00:20:37.325 ], 00:20:37.325 "driver_specific": {} 00:20:37.325 }' 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.325 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.583 08:47:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:37.583 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:37.841 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:37.841 "name": "BaseBdev3", 00:20:37.841 "aliases": [ 00:20:37.841 "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7" 00:20:37.841 ], 00:20:37.841 "product_name": "Malloc disk", 00:20:37.841 "block_size": 512, 00:20:37.841 "num_blocks": 65536, 00:20:37.841 "uuid": "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7", 00:20:37.841 "assigned_rate_limits": { 00:20:37.841 "rw_ios_per_sec": 0, 00:20:37.841 "rw_mbytes_per_sec": 0, 00:20:37.841 "r_mbytes_per_sec": 0, 00:20:37.841 "w_mbytes_per_sec": 0 00:20:37.841 }, 00:20:37.841 "claimed": true, 00:20:37.841 "claim_type": "exclusive_write", 00:20:37.841 "zoned": false, 00:20:37.841 "supported_io_types": { 00:20:37.841 "read": true, 00:20:37.841 "write": true, 00:20:37.841 "unmap": true, 00:20:37.841 "flush": true, 00:20:37.841 "reset": true, 00:20:37.841 "nvme_admin": false, 00:20:37.841 "nvme_io": false, 00:20:37.841 "nvme_io_md": false, 00:20:37.841 "write_zeroes": true, 00:20:37.841 "zcopy": true, 00:20:37.841 "get_zone_info": false, 00:20:37.841 "zone_management": false, 00:20:37.841 "zone_append": false, 00:20:37.841 "compare": false, 00:20:37.841 "compare_and_write": false, 00:20:37.841 "abort": true, 00:20:37.841 "seek_hole": false, 00:20:37.841 "seek_data": false, 00:20:37.841 "copy": true, 00:20:37.841 "nvme_iov_md": false 00:20:37.841 }, 00:20:37.841 "memory_domains": [ 00:20:37.841 { 00:20:37.841 "dma_device_id": "system", 00:20:37.841 "dma_device_type": 1 00:20:37.841 }, 00:20:37.841 { 00:20:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.841 "dma_device_type": 2 00:20:37.841 } 00:20:37.841 ], 00:20:37.841 "driver_specific": {} 00:20:37.841 }' 00:20:37.841 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.842 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.100 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.360 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:38.360 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.360 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.360 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:38.360 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:38.618 [2024-07-12 
08:47:13.754124] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:38.618 [2024-07-12 08:47:13.754349] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.618 [2024-07-12 08:47:13.754538] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.876 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.134 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.134 "name": "Existed_Raid", 00:20:39.134 "uuid": "7a9abbad-2e2b-4956-8bee-a04e89fe82e5", 00:20:39.134 "strip_size_kb": 64, 00:20:39.134 "state": "offline", 00:20:39.134 "raid_level": "concat", 00:20:39.134 "superblock": false, 00:20:39.134 "num_base_bdevs": 3, 00:20:39.134 "num_base_bdevs_discovered": 2, 00:20:39.134 "num_base_bdevs_operational": 2, 00:20:39.134 "base_bdevs_list": [ 00:20:39.134 { 00:20:39.134 "name": null, 00:20:39.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.134 "is_configured": false, 00:20:39.134 "data_offset": 0, 00:20:39.134 "data_size": 65536 00:20:39.134 }, 00:20:39.134 { 00:20:39.134 "name": "BaseBdev2", 00:20:39.134 "uuid": "70233b20-af5d-475b-b2e1-9c80587dd6a4", 00:20:39.134 "is_configured": true, 00:20:39.134 "data_offset": 0, 00:20:39.134 "data_size": 65536 00:20:39.134 }, 00:20:39.134 { 00:20:39.134 "name": "BaseBdev3", 00:20:39.134 "uuid": "099b8ad9-f2b6-4f6b-b87e-3bed0b5976c7", 00:20:39.134 "is_configured": true, 00:20:39.134 "data_offset": 0, 00:20:39.134 "data_size": 65536 00:20:39.134 } 00:20:39.134 ] 00:20:39.134 }' 00:20:39.134 08:47:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.134 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.700 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:39.700 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:39.700 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.700 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:40.266 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:40.266 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:40.266 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:40.266 [2024-07-12 08:47:15.423630] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:40.524 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:40.524 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:40.524 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.524 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:40.782 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:40.782 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:40.782 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:41.039 [2024-07-12 08:47:16.115543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:41.039 [2024-07-12 08:47:16.115668] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:41.039 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:41.039 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.039 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.039 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:41.605 BaseBdev2 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:41.605 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.864 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:42.120 [ 00:20:42.120 { 00:20:42.120 "name": "BaseBdev2", 00:20:42.120 "aliases": [ 00:20:42.120 "1b674778-2fac-4e2f-83f6-6df34d0407a7" 00:20:42.120 ], 00:20:42.120 "product_name": "Malloc disk", 00:20:42.120 "block_size": 512, 00:20:42.120 "num_blocks": 65536, 00:20:42.120 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:42.120 "assigned_rate_limits": { 00:20:42.120 "rw_ios_per_sec": 0, 00:20:42.120 "rw_mbytes_per_sec": 0, 00:20:42.120 "r_mbytes_per_sec": 0, 00:20:42.120 "w_mbytes_per_sec": 0 00:20:42.120 }, 00:20:42.120 "claimed": false, 00:20:42.120 "zoned": false, 00:20:42.120 "supported_io_types": { 00:20:42.120 "read": true, 00:20:42.120 "write": true, 00:20:42.120 "unmap": true, 00:20:42.120 "flush": true, 00:20:42.120 "reset": true, 00:20:42.120 "nvme_admin": false, 00:20:42.120 "nvme_io": false, 00:20:42.120 "nvme_io_md": false, 00:20:42.120 "write_zeroes": true, 00:20:42.120 "zcopy": true, 00:20:42.120 "get_zone_info": false, 00:20:42.120 "zone_management": false, 00:20:42.120 "zone_append": false, 00:20:42.120 "compare": false, 00:20:42.120 "compare_and_write": false, 00:20:42.120 "abort": true, 00:20:42.120 "seek_hole": false, 00:20:42.120 "seek_data": false, 00:20:42.120 "copy": true, 00:20:42.120 "nvme_iov_md": false 00:20:42.120 }, 00:20:42.120 "memory_domains": [ 00:20:42.120 { 00:20:42.120 "dma_device_id": "system", 00:20:42.120 "dma_device_type": 1 00:20:42.120 }, 00:20:42.120 { 00:20:42.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.120 "dma_device_type": 2 00:20:42.120 } 00:20:42.120 ], 00:20:42.120 "driver_specific": {} 00:20:42.120 } 00:20:42.120 ] 00:20:42.120 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:42.120 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:42.120 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:42.120 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:42.378 BaseBdev3 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:42.378 08:47:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:42.378 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.635 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:42.893 [ 00:20:42.893 { 00:20:42.894 "name": "BaseBdev3", 00:20:42.894 "aliases": [ 00:20:42.894 "98c6b18b-f003-4715-88bf-750e6bd29aea" 00:20:42.894 ], 00:20:42.894 "product_name": "Malloc disk", 00:20:42.894 "block_size": 512, 00:20:42.894 "num_blocks": 65536, 00:20:42.894 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:42.894 "assigned_rate_limits": { 00:20:42.894 "rw_ios_per_sec": 0, 00:20:42.894 "rw_mbytes_per_sec": 0, 00:20:42.894 "r_mbytes_per_sec": 0, 00:20:42.894 "w_mbytes_per_sec": 0 00:20:42.894 }, 00:20:42.894 "claimed": false, 00:20:42.894 "zoned": false, 00:20:42.894 "supported_io_types": { 00:20:42.894 "read": true, 00:20:42.894 "write": true, 00:20:42.894 "unmap": true, 00:20:42.894 "flush": true, 00:20:42.894 "reset": true, 00:20:42.894 "nvme_admin": false, 00:20:42.894 "nvme_io": false, 00:20:42.894 "nvme_io_md": false, 00:20:42.894 "write_zeroes": true, 00:20:42.894 "zcopy": true, 00:20:42.894 "get_zone_info": false, 00:20:42.894 "zone_management": false, 00:20:42.894 "zone_append": false, 00:20:42.894 "compare": false, 00:20:42.894 "compare_and_write": false, 00:20:42.894 "abort": true, 00:20:42.894 "seek_hole": false, 00:20:42.894 "seek_data": false, 00:20:42.894 "copy": true, 00:20:42.894 "nvme_iov_md": false 00:20:42.894 }, 00:20:42.894 "memory_domains": [ 00:20:42.894 { 00:20:42.894 "dma_device_id": "system", 00:20:42.894 "dma_device_type": 1 00:20:42.894 }, 00:20:42.894 { 00:20:42.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.894 "dma_device_type": 2 00:20:42.894 } 00:20:42.894 ], 00:20:42.894 "driver_specific": {} 00:20:42.894 } 00:20:42.894 ] 00:20:42.894 08:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:42.894 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:42.894 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:42.894 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:43.459 [2024-07-12 08:47:18.350419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.459 [2024-07-12 08:47:18.350682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.459 [2024-07-12 08:47:18.350855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.459 [2024-07-12 08:47:18.353232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.459 "name": "Existed_Raid", 00:20:43.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.459 "strip_size_kb": 64, 00:20:43.459 "state": "configuring", 00:20:43.459 "raid_level": "concat", 00:20:43.459 "superblock": false, 00:20:43.459 "num_base_bdevs": 3, 00:20:43.459 "num_base_bdevs_discovered": 2, 00:20:43.459 "num_base_bdevs_operational": 3, 00:20:43.459 "base_bdevs_list": [ 00:20:43.459 { 00:20:43.459 "name": "BaseBdev1", 00:20:43.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.459 "is_configured": false, 00:20:43.459 "data_offset": 0, 00:20:43.459 "data_size": 0 00:20:43.459 }, 00:20:43.459 { 00:20:43.459 "name": "BaseBdev2", 00:20:43.459 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:43.459 "is_configured": true, 00:20:43.459 "data_offset": 0, 00:20:43.459 "data_size": 65536 00:20:43.459 }, 00:20:43.459 { 00:20:43.459 "name": "BaseBdev3", 00:20:43.459 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:43.459 "is_configured": true, 00:20:43.459 "data_offset": 0, 00:20:43.459 "data_size": 65536 00:20:43.459 } 00:20:43.459 ] 00:20:43.459 }' 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.459 08:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:44.394 [2024-07-12 08:47:19.506678] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.394 08:47:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.394 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.652 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.652 "name": "Existed_Raid", 00:20:44.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.652 "strip_size_kb": 64, 00:20:44.652 "state": "configuring", 00:20:44.652 "raid_level": "concat", 00:20:44.652 "superblock": false, 00:20:44.652 "num_base_bdevs": 3, 00:20:44.652 "num_base_bdevs_discovered": 1, 00:20:44.652 "num_base_bdevs_operational": 3, 00:20:44.652 "base_bdevs_list": [ 00:20:44.652 { 00:20:44.652 "name": "BaseBdev1", 00:20:44.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.652 "is_configured": false, 00:20:44.652 "data_offset": 0, 00:20:44.652 "data_size": 0 00:20:44.652 }, 00:20:44.652 { 00:20:44.652 "name": null, 00:20:44.652 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:44.652 "is_configured": false, 00:20:44.652 "data_offset": 0, 00:20:44.652 "data_size": 65536 00:20:44.652 }, 00:20:44.652 { 00:20:44.652 "name": "BaseBdev3", 00:20:44.652 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:44.652 "is_configured": true, 00:20:44.652 "data_offset": 0, 00:20:44.652 "data_size": 65536 00:20:44.652 } 00:20:44.652 ] 00:20:44.652 }' 00:20:44.652 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.653 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.588 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.588 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:45.901 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:45.901 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:46.159 [2024-07-12 08:47:21.129995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.159 BaseBdev1 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.159 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.443 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:46.718 [ 00:20:46.718 { 00:20:46.718 "name": "BaseBdev1", 00:20:46.718 "aliases": [ 00:20:46.718 "20afab2c-4430-48f4-b0f5-4a043c104a45" 00:20:46.718 ], 00:20:46.718 "product_name": "Malloc disk", 00:20:46.718 "block_size": 512, 00:20:46.718 "num_blocks": 65536, 00:20:46.718 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:46.718 "assigned_rate_limits": { 00:20:46.718 "rw_ios_per_sec": 0, 00:20:46.718 "rw_mbytes_per_sec": 0, 00:20:46.718 "r_mbytes_per_sec": 0, 00:20:46.718 "w_mbytes_per_sec": 0 00:20:46.718 }, 00:20:46.718 "claimed": true, 00:20:46.718 "claim_type": "exclusive_write", 00:20:46.718 "zoned": false, 00:20:46.718 "supported_io_types": { 00:20:46.718 "read": true, 00:20:46.718 "write": true, 00:20:46.718 "unmap": true, 00:20:46.718 "flush": true, 00:20:46.718 "reset": true, 00:20:46.718 "nvme_admin": false, 00:20:46.718 "nvme_io": false, 00:20:46.718 "nvme_io_md": false, 00:20:46.718 "write_zeroes": true, 00:20:46.718 "zcopy": true, 00:20:46.718 "get_zone_info": false, 00:20:46.718 "zone_management": false, 00:20:46.718 "zone_append": false, 00:20:46.718 "compare": false, 00:20:46.718 "compare_and_write": false, 00:20:46.718 "abort": true, 00:20:46.718 "seek_hole": false, 00:20:46.718 "seek_data": false, 00:20:46.718 "copy": true, 00:20:46.718 "nvme_iov_md": false 00:20:46.718 }, 00:20:46.718 "memory_domains": [ 00:20:46.718 { 00:20:46.718 "dma_device_id": "system", 00:20:46.718 "dma_device_type": 1 00:20:46.718 }, 00:20:46.718 { 00:20:46.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.718 "dma_device_type": 2 00:20:46.718 } 00:20:46.718 ], 00:20:46.718 "driver_specific": {} 00:20:46.718 } 00:20:46.718 ] 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.718 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.719 08:47:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.719 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.719 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.719 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.977 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.977 "name": "Existed_Raid", 00:20:46.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.977 "strip_size_kb": 64, 00:20:46.977 "state": "configuring", 00:20:46.977 "raid_level": "concat", 00:20:46.977 "superblock": false, 00:20:46.977 "num_base_bdevs": 3, 00:20:46.977 "num_base_bdevs_discovered": 2, 00:20:46.977 "num_base_bdevs_operational": 3, 00:20:46.977 "base_bdevs_list": [ 00:20:46.977 { 00:20:46.977 "name": "BaseBdev1", 00:20:46.977 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:46.977 "is_configured": true, 00:20:46.977 "data_offset": 0, 00:20:46.977 "data_size": 65536 00:20:46.977 }, 00:20:46.977 { 00:20:46.977 "name": null, 00:20:46.977 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:46.977 "is_configured": false, 00:20:46.977 "data_offset": 0, 00:20:46.977 "data_size": 65536 00:20:46.977 }, 00:20:46.977 { 00:20:46.977 "name": "BaseBdev3", 00:20:46.977 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:46.977 "is_configured": true, 00:20:46.977 "data_offset": 0, 00:20:46.977 "data_size": 65536 00:20:46.977 } 00:20:46.977 ] 00:20:46.977 }' 00:20:46.977 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.977 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.937 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.937 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:47.937 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:47.937 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:48.271 [2024-07-12 08:47:23.310607] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
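The verify_raid_bdev_state calls traced above all reduce to one pattern: dump every raid bdev over the test's RPC socket, select the array under test by name, and compare the reported fields against the caller's expectations. A minimal sketch of that pattern, assuming only the rpc.py path, socket, and jq filters already visible in this trace (the shell variables and hard-coded expected values are illustrative, not the script's actual code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Fetch the full raid dump once, then pick the array under test by name.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  # Expected values below mirror this trace: configuring / concat / 64 / 3.
  [[ $(jq -r .state <<< "$info") == configuring ]]
  [[ $(jq -r .raid_level <<< "$info") == concat ]]
  [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]

Because bdev_raid_get_bdevs is queried with a category (all here) rather than a single name, the jq select does the per-array filtering.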
00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.271 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.548 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.548 "name": "Existed_Raid", 00:20:48.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.548 "strip_size_kb": 64, 00:20:48.548 "state": "configuring", 00:20:48.548 "raid_level": "concat", 00:20:48.548 "superblock": false, 00:20:48.548 "num_base_bdevs": 3, 00:20:48.548 "num_base_bdevs_discovered": 1, 00:20:48.548 "num_base_bdevs_operational": 3, 00:20:48.548 "base_bdevs_list": [ 00:20:48.548 { 00:20:48.548 "name": "BaseBdev1", 00:20:48.548 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:48.548 "is_configured": true, 00:20:48.548 "data_offset": 0, 00:20:48.548 "data_size": 65536 00:20:48.548 }, 00:20:48.548 { 00:20:48.548 "name": null, 00:20:48.548 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:48.548 "is_configured": false, 00:20:48.548 "data_offset": 0, 00:20:48.548 "data_size": 65536 00:20:48.548 }, 00:20:48.548 { 00:20:48.548 "name": null, 00:20:48.548 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:48.548 "is_configured": false, 00:20:48.548 "data_offset": 0, 00:20:48.548 "data_size": 65536 00:20:48.548 } 00:20:48.548 ] 00:20:48.548 }' 00:20:48.548 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.548 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.481 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.481 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.481 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:49.481 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:49.739 [2024-07-12 08:47:24.927195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.998 08:47:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.998 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.998 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.998 "name": "Existed_Raid", 00:20:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.998 "strip_size_kb": 64, 00:20:49.998 "state": "configuring", 00:20:49.998 "raid_level": "concat", 00:20:49.998 "superblock": false, 00:20:49.998 "num_base_bdevs": 3, 00:20:49.998 "num_base_bdevs_discovered": 2, 00:20:49.998 "num_base_bdevs_operational": 3, 00:20:49.998 "base_bdevs_list": [ 00:20:49.998 { 00:20:49.998 "name": "BaseBdev1", 00:20:49.998 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:49.998 "is_configured": true, 00:20:49.998 "data_offset": 0, 00:20:49.998 "data_size": 65536 00:20:49.998 }, 00:20:49.998 { 00:20:49.998 "name": null, 00:20:49.998 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:49.998 "is_configured": false, 00:20:49.998 "data_offset": 0, 00:20:49.998 "data_size": 65536 00:20:49.998 }, 00:20:49.998 { 00:20:49.998 "name": "BaseBdev3", 00:20:49.998 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:49.998 "is_configured": true, 00:20:49.998 "data_offset": 0, 00:20:49.998 "data_size": 65536 00:20:49.998 } 00:20:49.998 ] 00:20:49.998 }' 00:20:49.998 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.998 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.933 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.933 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:51.192 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:51.192 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.449 [2024-07-12 08:47:26.403737] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.450 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.707 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.707 "name": "Existed_Raid", 00:20:51.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.707 "strip_size_kb": 64, 00:20:51.707 "state": "configuring", 00:20:51.707 "raid_level": "concat", 00:20:51.707 "superblock": false, 00:20:51.707 "num_base_bdevs": 3, 00:20:51.707 "num_base_bdevs_discovered": 1, 00:20:51.707 "num_base_bdevs_operational": 3, 00:20:51.707 "base_bdevs_list": [ 00:20:51.707 { 00:20:51.707 "name": null, 00:20:51.707 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:51.707 "is_configured": false, 00:20:51.707 "data_offset": 0, 00:20:51.707 "data_size": 65536 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "name": null, 00:20:51.707 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:51.707 "is_configured": false, 00:20:51.707 "data_offset": 0, 00:20:51.707 "data_size": 65536 00:20:51.707 }, 00:20:51.707 { 00:20:51.707 "name": "BaseBdev3", 00:20:51.707 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:51.707 "is_configured": true, 00:20:51.707 "data_offset": 0, 00:20:51.707 "data_size": 65536 00:20:51.707 } 00:20:51.707 ] 00:20:51.707 }' 00:20:51.707 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.707 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.640 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.640 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:52.640 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:52.640 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:52.898 [2024-07-12 08:47:28.073524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
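From here the test repeatedly tears base bdevs out of the still-configuring array and re-attaches them, checking after each step that the affected slot's is_configured flag flips. A condensed sketch of that remove/re-add cycle, built only from RPCs that appear in this trace; the expected results in the comments are read off the surrounding output, not produced by the sketch itself:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Deleting a claimed malloc bdev implicitly removes it from the raid;
  # the array stays "configuring" with an empty slot in base_bdevs_list.
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'  # expect: false
  # Re-attach an existing bdev; the raid claims it and fills slot 1 again.
  "$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'  # expect: true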
00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.898 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.156 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.156 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.461 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.461 "name": "Existed_Raid", 00:20:53.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.461 "strip_size_kb": 64, 00:20:53.461 "state": "configuring", 00:20:53.461 "raid_level": "concat", 00:20:53.461 "superblock": false, 00:20:53.461 "num_base_bdevs": 3, 00:20:53.461 "num_base_bdevs_discovered": 2, 00:20:53.461 "num_base_bdevs_operational": 3, 00:20:53.461 "base_bdevs_list": [ 00:20:53.461 { 00:20:53.461 "name": null, 00:20:53.461 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:53.461 "is_configured": false, 00:20:53.461 "data_offset": 0, 00:20:53.461 "data_size": 65536 00:20:53.461 }, 00:20:53.461 { 00:20:53.461 "name": "BaseBdev2", 00:20:53.461 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:53.461 "is_configured": true, 00:20:53.461 "data_offset": 0, 00:20:53.461 "data_size": 65536 00:20:53.461 }, 00:20:53.461 { 00:20:53.461 "name": "BaseBdev3", 00:20:53.461 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:53.461 "is_configured": true, 00:20:53.461 "data_offset": 0, 00:20:53.461 "data_size": 65536 00:20:53.461 } 00:20:53.461 ] 00:20:53.461 }' 00:20:53.461 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.461 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.025 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.025 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:54.282 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:54.282 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.282 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:54.540 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 20afab2c-4430-48f4-b0f5-4a043c104a45 00:20:55.105 [2024-07-12 08:47:30.018165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:55.105 [2024-07-12 08:47:30.018379] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:55.105 [2024-07-12 08:47:30.018421] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:55.105 [2024-07-12 08:47:30.018641] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:55.105 [2024-07-12 08:47:30.019105] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:55.105 [2024-07-12 08:47:30.019239] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:20:55.105 [2024-07-12 08:47:30.019584] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.105 NewBaseBdev 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:55.105 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.363 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:55.363 [ 00:20:55.363 { 00:20:55.363 "name": "NewBaseBdev", 00:20:55.363 "aliases": [ 00:20:55.363 "20afab2c-4430-48f4-b0f5-4a043c104a45" 00:20:55.363 ], 00:20:55.363 "product_name": "Malloc disk", 00:20:55.363 "block_size": 512, 00:20:55.363 "num_blocks": 65536, 00:20:55.363 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:55.363 "assigned_rate_limits": { 00:20:55.363 "rw_ios_per_sec": 0, 00:20:55.363 "rw_mbytes_per_sec": 0, 00:20:55.363 "r_mbytes_per_sec": 0, 00:20:55.363 "w_mbytes_per_sec": 0 00:20:55.363 }, 00:20:55.363 "claimed": true, 00:20:55.363 "claim_type": "exclusive_write", 00:20:55.363 "zoned": false, 00:20:55.363 "supported_io_types": { 00:20:55.363 "read": true, 00:20:55.363 "write": true, 00:20:55.363 "unmap": true, 00:20:55.363 "flush": true, 00:20:55.363 "reset": true, 00:20:55.363 "nvme_admin": false, 00:20:55.363 "nvme_io": false, 00:20:55.363 "nvme_io_md": false, 00:20:55.363 "write_zeroes": true, 00:20:55.363 "zcopy": true, 00:20:55.363 "get_zone_info": false, 00:20:55.363 "zone_management": false, 00:20:55.363 "zone_append": false, 00:20:55.363 "compare": false, 00:20:55.363 "compare_and_write": false, 00:20:55.363 "abort": true, 00:20:55.363 "seek_hole": false, 00:20:55.363 "seek_data": false, 00:20:55.363 "copy": true, 00:20:55.363 "nvme_iov_md": false 00:20:55.363 }, 00:20:55.363 "memory_domains": [ 00:20:55.363 { 00:20:55.363 "dma_device_id": "system", 00:20:55.363 "dma_device_type": 1 00:20:55.363 }, 00:20:55.363 { 00:20:55.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.363 "dma_device_type": 2 00:20:55.363 } 00:20:55.363 ], 00:20:55.363 "driver_specific": {} 00:20:55.363 } 00:20:55.363 ] 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.690 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.690 "name": "Existed_Raid", 00:20:55.690 "uuid": "939ad22d-9c96-4d62-bb8a-ddaa3e99e0e9", 00:20:55.690 "strip_size_kb": 64, 00:20:55.690 "state": "online", 00:20:55.690 "raid_level": "concat", 00:20:55.690 "superblock": false, 00:20:55.690 "num_base_bdevs": 3, 00:20:55.690 "num_base_bdevs_discovered": 3, 00:20:55.690 "num_base_bdevs_operational": 3, 00:20:55.690 "base_bdevs_list": [ 00:20:55.690 { 00:20:55.690 "name": "NewBaseBdev", 00:20:55.690 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:55.690 "is_configured": true, 00:20:55.690 "data_offset": 0, 00:20:55.690 "data_size": 65536 00:20:55.690 }, 00:20:55.690 { 00:20:55.690 "name": "BaseBdev2", 00:20:55.690 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:55.690 "is_configured": true, 00:20:55.690 "data_offset": 0, 00:20:55.690 "data_size": 65536 00:20:55.690 }, 00:20:55.690 { 00:20:55.690 "name": "BaseBdev3", 00:20:55.691 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:55.691 "is_configured": true, 00:20:55.691 "data_offset": 0, 00:20:55.691 "data_size": 65536 00:20:55.691 } 00:20:55.691 ] 00:20:55.691 }' 00:20:55.691 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.691 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.637 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:56.638 08:47:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:56.638 [2024-07-12 08:47:31.759038] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:56.638 "name": "Existed_Raid", 00:20:56.638 "aliases": [ 00:20:56.638 "939ad22d-9c96-4d62-bb8a-ddaa3e99e0e9" 00:20:56.638 ], 00:20:56.638 "product_name": "Raid Volume", 00:20:56.638 "block_size": 512, 00:20:56.638 "num_blocks": 196608, 00:20:56.638 "uuid": "939ad22d-9c96-4d62-bb8a-ddaa3e99e0e9", 00:20:56.638 "assigned_rate_limits": { 00:20:56.638 "rw_ios_per_sec": 0, 00:20:56.638 "rw_mbytes_per_sec": 0, 00:20:56.638 "r_mbytes_per_sec": 0, 00:20:56.638 "w_mbytes_per_sec": 0 00:20:56.638 }, 00:20:56.638 "claimed": false, 00:20:56.638 "zoned": false, 00:20:56.638 "supported_io_types": { 00:20:56.638 "read": true, 00:20:56.638 "write": true, 00:20:56.638 "unmap": true, 00:20:56.638 "flush": true, 00:20:56.638 "reset": true, 00:20:56.638 "nvme_admin": false, 00:20:56.638 "nvme_io": false, 00:20:56.638 "nvme_io_md": false, 00:20:56.638 "write_zeroes": true, 00:20:56.638 "zcopy": false, 00:20:56.638 "get_zone_info": false, 00:20:56.638 "zone_management": false, 00:20:56.638 "zone_append": false, 00:20:56.638 "compare": false, 00:20:56.638 "compare_and_write": false, 00:20:56.638 "abort": false, 00:20:56.638 "seek_hole": false, 00:20:56.638 "seek_data": false, 00:20:56.638 "copy": false, 00:20:56.638 "nvme_iov_md": false 00:20:56.638 }, 00:20:56.638 "memory_domains": [ 00:20:56.638 { 00:20:56.638 "dma_device_id": "system", 00:20:56.638 "dma_device_type": 1 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.638 "dma_device_type": 2 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "dma_device_id": "system", 00:20:56.638 "dma_device_type": 1 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.638 "dma_device_type": 2 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "dma_device_id": "system", 00:20:56.638 "dma_device_type": 1 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.638 "dma_device_type": 2 00:20:56.638 } 00:20:56.638 ], 00:20:56.638 "driver_specific": { 00:20:56.638 "raid": { 00:20:56.638 "uuid": "939ad22d-9c96-4d62-bb8a-ddaa3e99e0e9", 00:20:56.638 "strip_size_kb": 64, 00:20:56.638 "state": "online", 00:20:56.638 "raid_level": "concat", 00:20:56.638 "superblock": false, 00:20:56.638 "num_base_bdevs": 3, 00:20:56.638 "num_base_bdevs_discovered": 3, 00:20:56.638 "num_base_bdevs_operational": 3, 00:20:56.638 "base_bdevs_list": [ 00:20:56.638 { 00:20:56.638 "name": "NewBaseBdev", 00:20:56.638 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:56.638 "is_configured": true, 00:20:56.638 "data_offset": 0, 00:20:56.638 "data_size": 65536 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "name": "BaseBdev2", 00:20:56.638 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:56.638 "is_configured": true, 00:20:56.638 "data_offset": 0, 00:20:56.638 "data_size": 65536 00:20:56.638 }, 00:20:56.638 { 00:20:56.638 "name": "BaseBdev3", 00:20:56.638 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:56.638 "is_configured": true, 00:20:56.638 "data_offset": 0, 00:20:56.638 "data_size": 65536 00:20:56.638 } 00:20:56.638 ] 00:20:56.638 } 00:20:56.638 } 00:20:56.638 }' 00:20:56.638 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:56.896 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:56.896 BaseBdev2 00:20:56.896 BaseBdev3' 00:20:56.896 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.896 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:56.896 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.153 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.153 "name": "NewBaseBdev", 00:20:57.153 "aliases": [ 00:20:57.153 "20afab2c-4430-48f4-b0f5-4a043c104a45" 00:20:57.153 ], 00:20:57.153 "product_name": "Malloc disk", 00:20:57.153 "block_size": 512, 00:20:57.153 "num_blocks": 65536, 00:20:57.154 "uuid": "20afab2c-4430-48f4-b0f5-4a043c104a45", 00:20:57.154 "assigned_rate_limits": { 00:20:57.154 "rw_ios_per_sec": 0, 00:20:57.154 "rw_mbytes_per_sec": 0, 00:20:57.154 "r_mbytes_per_sec": 0, 00:20:57.154 "w_mbytes_per_sec": 0 00:20:57.154 }, 00:20:57.154 "claimed": true, 00:20:57.154 "claim_type": "exclusive_write", 00:20:57.154 "zoned": false, 00:20:57.154 "supported_io_types": { 00:20:57.154 "read": true, 00:20:57.154 "write": true, 00:20:57.154 "unmap": true, 00:20:57.154 "flush": true, 00:20:57.154 "reset": true, 00:20:57.154 "nvme_admin": false, 00:20:57.154 "nvme_io": false, 00:20:57.154 "nvme_io_md": false, 00:20:57.154 "write_zeroes": true, 00:20:57.154 "zcopy": true, 00:20:57.154 "get_zone_info": false, 00:20:57.154 "zone_management": false, 00:20:57.154 "zone_append": false, 00:20:57.154 "compare": false, 00:20:57.154 "compare_and_write": false, 00:20:57.154 "abort": true, 00:20:57.154 "seek_hole": false, 00:20:57.154 "seek_data": false, 00:20:57.154 "copy": true, 00:20:57.154 "nvme_iov_md": false 00:20:57.154 }, 00:20:57.154 "memory_domains": [ 00:20:57.154 { 00:20:57.154 "dma_device_id": "system", 00:20:57.154 "dma_device_type": 1 00:20:57.154 }, 00:20:57.154 { 00:20:57.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.154 "dma_device_type": 2 00:20:57.154 } 00:20:57.154 ], 00:20:57.154 "driver_specific": {} 00:20:57.154 }' 00:20:57.154 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.154 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.154 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.154 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.154 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.411 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.668 08:47:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.668 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.668 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:57.668 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.926 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.926 "name": "BaseBdev2", 00:20:57.926 "aliases": [ 00:20:57.926 "1b674778-2fac-4e2f-83f6-6df34d0407a7" 00:20:57.926 ], 00:20:57.926 "product_name": "Malloc disk", 00:20:57.926 "block_size": 512, 00:20:57.926 "num_blocks": 65536, 00:20:57.926 "uuid": "1b674778-2fac-4e2f-83f6-6df34d0407a7", 00:20:57.926 "assigned_rate_limits": { 00:20:57.926 "rw_ios_per_sec": 0, 00:20:57.926 "rw_mbytes_per_sec": 0, 00:20:57.926 "r_mbytes_per_sec": 0, 00:20:57.926 "w_mbytes_per_sec": 0 00:20:57.926 }, 00:20:57.926 "claimed": true, 00:20:57.926 "claim_type": "exclusive_write", 00:20:57.926 "zoned": false, 00:20:57.926 "supported_io_types": { 00:20:57.926 "read": true, 00:20:57.926 "write": true, 00:20:57.926 "unmap": true, 00:20:57.926 "flush": true, 00:20:57.926 "reset": true, 00:20:57.926 "nvme_admin": false, 00:20:57.926 "nvme_io": false, 00:20:57.926 "nvme_io_md": false, 00:20:57.926 "write_zeroes": true, 00:20:57.926 "zcopy": true, 00:20:57.926 "get_zone_info": false, 00:20:57.926 "zone_management": false, 00:20:57.926 "zone_append": false, 00:20:57.926 "compare": false, 00:20:57.926 "compare_and_write": false, 00:20:57.926 "abort": true, 00:20:57.926 "seek_hole": false, 00:20:57.926 "seek_data": false, 00:20:57.926 "copy": true, 00:20:57.926 "nvme_iov_md": false 00:20:57.926 }, 00:20:57.926 "memory_domains": [ 00:20:57.926 { 00:20:57.926 "dma_device_id": "system", 00:20:57.926 "dma_device_type": 1 00:20:57.926 }, 00:20:57.926 { 00:20:57.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.926 "dma_device_type": 2 00:20:57.926 } 00:20:57.926 ], 00:20:57.926 "driver_specific": {} 00:20:57.926 }' 00:20:57.926 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.926 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.926 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.926 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.926 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.183 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.441 08:47:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:58.441 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.698 "name": "BaseBdev3", 00:20:58.698 "aliases": [ 00:20:58.698 "98c6b18b-f003-4715-88bf-750e6bd29aea" 00:20:58.698 ], 00:20:58.698 "product_name": "Malloc disk", 00:20:58.698 "block_size": 512, 00:20:58.698 "num_blocks": 65536, 00:20:58.698 "uuid": "98c6b18b-f003-4715-88bf-750e6bd29aea", 00:20:58.698 "assigned_rate_limits": { 00:20:58.698 "rw_ios_per_sec": 0, 00:20:58.698 "rw_mbytes_per_sec": 0, 00:20:58.698 "r_mbytes_per_sec": 0, 00:20:58.698 "w_mbytes_per_sec": 0 00:20:58.698 }, 00:20:58.698 "claimed": true, 00:20:58.698 "claim_type": "exclusive_write", 00:20:58.698 "zoned": false, 00:20:58.698 "supported_io_types": { 00:20:58.698 "read": true, 00:20:58.698 "write": true, 00:20:58.698 "unmap": true, 00:20:58.698 "flush": true, 00:20:58.698 "reset": true, 00:20:58.698 "nvme_admin": false, 00:20:58.698 "nvme_io": false, 00:20:58.698 "nvme_io_md": false, 00:20:58.698 "write_zeroes": true, 00:20:58.698 "zcopy": true, 00:20:58.698 "get_zone_info": false, 00:20:58.698 "zone_management": false, 00:20:58.698 "zone_append": false, 00:20:58.698 "compare": false, 00:20:58.698 "compare_and_write": false, 00:20:58.698 "abort": true, 00:20:58.698 "seek_hole": false, 00:20:58.698 "seek_data": false, 00:20:58.698 "copy": true, 00:20:58.698 "nvme_iov_md": false 00:20:58.698 }, 00:20:58.698 "memory_domains": [ 00:20:58.698 { 00:20:58.698 "dma_device_id": "system", 00:20:58.698 "dma_device_type": 1 00:20:58.698 }, 00:20:58.698 { 00:20:58.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.698 "dma_device_type": 2 00:20:58.698 } 00:20:58.698 ], 00:20:58.698 "driver_specific": {} 00:20:58.698 }' 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.698 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.956 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.956 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.956 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.956 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.956 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.956 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:59.214 [2024-07-12 08:47:34.367466] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.214 [2024-07-12 
08:47:34.367701] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.214 [2024-07-12 08:47:34.367940] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.214 [2024-07-12 08:47:34.368113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.214 [2024-07-12 08:47:34.368221] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 129139 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 129139 ']' 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 129139 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129139 00:20:59.215 killing process with pid 129139 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129139' 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 129139 00:20:59.215 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 129139 00:20:59.215 [2024-07-12 08:47:34.397203] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.472 [2024-07-12 08:47:34.649239] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.870 ************************************ 00:21:00.870 END TEST raid_state_function_test 00:21:00.870 ************************************ 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:00.871 00:21:00.871 real 0m34.885s 00:21:00.871 user 1m5.611s 00:21:00.871 sys 0m3.701s 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.871 08:47:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:00.871 08:47:35 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:21:00.871 08:47:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:00.871 08:47:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.871 08:47:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.871 ************************************ 00:21:00.871 START TEST raid_state_function_test_sb 00:21:00.871 ************************************ 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=130208 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130208' 00:21:00.871 Process raid pid: 130208 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 130208 /var/tmp/spdk-raid.sock 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 130208 ']' 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:00.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.871 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.871 [2024-07-12 08:47:35.921841] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:21:00.871 [2024-07-12 08:47:35.922200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.129 [2024-07-12 08:47:36.082421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.387 [2024-07-12 08:47:36.340500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.387 [2024-07-12 08:47:36.550135] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.952 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.952 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:21:01.953 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:02.210 [2024-07-12 08:47:37.253497] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.210 [2024-07-12 08:47:37.253799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.210 [2024-07-12 08:47:37.253912] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.210 [2024-07-12 08:47:37.253982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.210 [2024-07-12 08:47:37.254207] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.210 [2024-07-12 08:47:37.254266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.210 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.468 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.468 "name": "Existed_Raid", 00:21:02.468 "uuid": "54d34e39-120d-4b3c-919f-aa2f123a6216", 00:21:02.468 "strip_size_kb": 64, 00:21:02.468 "state": "configuring", 00:21:02.468 "raid_level": "concat", 00:21:02.468 "superblock": true, 00:21:02.468 "num_base_bdevs": 3, 00:21:02.468 "num_base_bdevs_discovered": 0, 00:21:02.468 "num_base_bdevs_operational": 3, 00:21:02.468 "base_bdevs_list": [ 00:21:02.468 { 00:21:02.468 "name": "BaseBdev1", 00:21:02.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.468 "is_configured": false, 00:21:02.468 "data_offset": 0, 00:21:02.468 "data_size": 0 00:21:02.468 }, 00:21:02.468 { 00:21:02.468 "name": "BaseBdev2", 00:21:02.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.468 "is_configured": false, 00:21:02.468 "data_offset": 0, 00:21:02.468 "data_size": 0 00:21:02.468 }, 00:21:02.468 { 00:21:02.468 "name": "BaseBdev3", 00:21:02.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.468 "is_configured": false, 00:21:02.468 "data_offset": 0, 00:21:02.468 "data_size": 0 00:21:02.468 } 00:21:02.468 ] 00:21:02.468 }' 00:21:02.468 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.468 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.402 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:03.402 [2024-07-12 08:47:38.557634] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:03.402 [2024-07-12 08:47:38.557863] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:03.402 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:03.660 [2024-07-12 08:47:38.846301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:03.660 [2024-07-12 08:47:38.846520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:03.660 [2024-07-12 08:47:38.846628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.660 [2024-07-12 08:47:38.846694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.660 [2024-07-12 08:47:38.846801] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.660 [2024-07-12 08:47:38.846865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:21:03.918 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:04.176 [2024-07-12 08:47:39.150323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.176 BaseBdev1 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.176 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.434 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:04.692 [ 00:21:04.692 { 00:21:04.692 "name": "BaseBdev1", 00:21:04.692 "aliases": [ 00:21:04.692 "b00762da-0f77-4d9b-aa03-6efea5519ad4" 00:21:04.692 ], 00:21:04.692 "product_name": "Malloc disk", 00:21:04.692 "block_size": 512, 00:21:04.692 "num_blocks": 65536, 00:21:04.692 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:04.692 "assigned_rate_limits": { 00:21:04.692 "rw_ios_per_sec": 0, 00:21:04.692 "rw_mbytes_per_sec": 0, 00:21:04.692 "r_mbytes_per_sec": 0, 00:21:04.692 "w_mbytes_per_sec": 0 00:21:04.692 }, 00:21:04.692 "claimed": true, 00:21:04.692 "claim_type": "exclusive_write", 00:21:04.692 "zoned": false, 00:21:04.692 "supported_io_types": { 00:21:04.692 "read": true, 00:21:04.692 "write": true, 00:21:04.692 "unmap": true, 00:21:04.692 "flush": true, 00:21:04.692 "reset": true, 00:21:04.692 "nvme_admin": false, 00:21:04.692 "nvme_io": false, 00:21:04.692 "nvme_io_md": false, 00:21:04.692 "write_zeroes": true, 00:21:04.692 "zcopy": true, 00:21:04.692 "get_zone_info": false, 00:21:04.692 "zone_management": false, 00:21:04.692 "zone_append": false, 00:21:04.692 "compare": false, 00:21:04.692 "compare_and_write": false, 00:21:04.692 "abort": true, 00:21:04.692 "seek_hole": false, 00:21:04.692 "seek_data": false, 00:21:04.692 "copy": true, 00:21:04.692 "nvme_iov_md": false 00:21:04.692 }, 00:21:04.692 "memory_domains": [ 00:21:04.692 { 00:21:04.692 "dma_device_id": "system", 00:21:04.692 "dma_device_type": 1 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.692 "dma_device_type": 2 00:21:04.693 } 00:21:04.693 ], 00:21:04.693 "driver_specific": {} 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.693 08:47:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.693 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.951 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.951 "name": "Existed_Raid", 00:21:04.951 "uuid": "e7ec0acd-7d6c-4f3d-a903-85a3af08aa5f", 00:21:04.951 "strip_size_kb": 64, 00:21:04.951 "state": "configuring", 00:21:04.951 "raid_level": "concat", 00:21:04.951 "superblock": true, 00:21:04.951 "num_base_bdevs": 3, 00:21:04.951 "num_base_bdevs_discovered": 1, 00:21:04.951 "num_base_bdevs_operational": 3, 00:21:04.951 "base_bdevs_list": [ 00:21:04.951 { 00:21:04.951 "name": "BaseBdev1", 00:21:04.951 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:04.951 "is_configured": true, 00:21:04.951 "data_offset": 2048, 00:21:04.951 "data_size": 63488 00:21:04.951 }, 00:21:04.951 { 00:21:04.951 "name": "BaseBdev2", 00:21:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.951 "is_configured": false, 00:21:04.951 "data_offset": 0, 00:21:04.951 "data_size": 0 00:21:04.951 }, 00:21:04.951 { 00:21:04.951 "name": "BaseBdev3", 00:21:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.951 "is_configured": false, 00:21:04.951 "data_offset": 0, 00:21:04.951 "data_size": 0 00:21:04.951 } 00:21:04.951 ] 00:21:04.951 }' 00:21:04.951 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.951 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.916 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:05.916 [2024-07-12 08:47:41.030857] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.916 [2024-07-12 08:47:41.031119] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:21:05.916 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:06.197 [2024-07-12 08:47:41.322967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.197 [2024-07-12 08:47:41.325334] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:21:06.197 [2024-07-12 08:47:41.325525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.197 [2024-07-12 08:47:41.325643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:06.197 [2024-07-12 08:47:41.325779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.197 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.455 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.455 "name": "Existed_Raid", 00:21:06.455 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:06.455 "strip_size_kb": 64, 00:21:06.455 "state": "configuring", 00:21:06.455 "raid_level": "concat", 00:21:06.455 "superblock": true, 00:21:06.455 "num_base_bdevs": 3, 00:21:06.455 "num_base_bdevs_discovered": 1, 00:21:06.455 "num_base_bdevs_operational": 3, 00:21:06.455 "base_bdevs_list": [ 00:21:06.455 { 00:21:06.455 "name": "BaseBdev1", 00:21:06.455 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:06.455 "is_configured": true, 00:21:06.455 "data_offset": 2048, 00:21:06.455 "data_size": 63488 00:21:06.455 }, 00:21:06.455 { 00:21:06.455 "name": "BaseBdev2", 00:21:06.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.455 "is_configured": false, 00:21:06.455 "data_offset": 0, 00:21:06.455 "data_size": 0 00:21:06.455 }, 00:21:06.455 { 00:21:06.455 "name": "BaseBdev3", 00:21:06.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.455 "is_configured": false, 00:21:06.455 "data_offset": 0, 00:21:06.455 "data_size": 0 00:21:06.455 } 00:21:06.455 ] 00:21:06.455 }' 00:21:06.455 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.455 08:47:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.390 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:07.648 [2024-07-12 08:47:42.684244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.648 BaseBdev2 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:07.648 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:07.649 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.906 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:08.165 [ 00:21:08.165 { 00:21:08.165 "name": "BaseBdev2", 00:21:08.165 "aliases": [ 00:21:08.165 "e42c5763-9eb0-430c-8647-1943d1afe379" 00:21:08.165 ], 00:21:08.165 "product_name": "Malloc disk", 00:21:08.165 "block_size": 512, 00:21:08.165 "num_blocks": 65536, 00:21:08.165 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:08.165 "assigned_rate_limits": { 00:21:08.165 "rw_ios_per_sec": 0, 00:21:08.165 "rw_mbytes_per_sec": 0, 00:21:08.165 "r_mbytes_per_sec": 0, 00:21:08.165 "w_mbytes_per_sec": 0 00:21:08.165 }, 00:21:08.165 "claimed": true, 00:21:08.165 "claim_type": "exclusive_write", 00:21:08.165 "zoned": false, 00:21:08.165 "supported_io_types": { 00:21:08.165 "read": true, 00:21:08.165 "write": true, 00:21:08.165 "unmap": true, 00:21:08.165 "flush": true, 00:21:08.165 "reset": true, 00:21:08.165 "nvme_admin": false, 00:21:08.165 "nvme_io": false, 00:21:08.165 "nvme_io_md": false, 00:21:08.165 "write_zeroes": true, 00:21:08.165 "zcopy": true, 00:21:08.165 "get_zone_info": false, 00:21:08.165 "zone_management": false, 00:21:08.165 "zone_append": false, 00:21:08.165 "compare": false, 00:21:08.165 "compare_and_write": false, 00:21:08.165 "abort": true, 00:21:08.165 "seek_hole": false, 00:21:08.165 "seek_data": false, 00:21:08.165 "copy": true, 00:21:08.165 "nvme_iov_md": false 00:21:08.165 }, 00:21:08.165 "memory_domains": [ 00:21:08.165 { 00:21:08.165 "dma_device_id": "system", 00:21:08.165 "dma_device_type": 1 00:21:08.165 }, 00:21:08.165 { 00:21:08.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.165 "dma_device_type": 2 00:21:08.165 } 00:21:08.165 ], 00:21:08.165 "driver_specific": {} 00:21:08.165 } 00:21:08.165 ] 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.165 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.423 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.423 "name": "Existed_Raid", 00:21:08.423 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:08.423 "strip_size_kb": 64, 00:21:08.423 "state": "configuring", 00:21:08.423 "raid_level": "concat", 00:21:08.423 "superblock": true, 00:21:08.423 "num_base_bdevs": 3, 00:21:08.423 "num_base_bdevs_discovered": 2, 00:21:08.423 "num_base_bdevs_operational": 3, 00:21:08.423 "base_bdevs_list": [ 00:21:08.423 { 00:21:08.423 "name": "BaseBdev1", 00:21:08.423 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:08.423 "is_configured": true, 00:21:08.423 "data_offset": 2048, 00:21:08.423 "data_size": 63488 00:21:08.423 }, 00:21:08.423 { 00:21:08.423 "name": "BaseBdev2", 00:21:08.423 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:08.423 "is_configured": true, 00:21:08.423 "data_offset": 2048, 00:21:08.423 "data_size": 63488 00:21:08.423 }, 00:21:08.423 { 00:21:08.423 "name": "BaseBdev3", 00:21:08.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.423 "is_configured": false, 00:21:08.423 "data_offset": 0, 00:21:08.423 "data_size": 0 00:21:08.423 } 00:21:08.423 ] 00:21:08.423 }' 00:21:08.423 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.423 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.989 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:09.248 [2024-07-12 08:47:44.426076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.248 [2024-07-12 08:47:44.426537] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:21:09.248 BaseBdev3 00:21:09.248 [2024-07-12 08:47:44.427102] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:09.248 [2024-07-12 08:47:44.427361] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005860 00:21:09.248 [2024-07-12 08:47:44.427876] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:21:09.248 [2024-07-12 08:47:44.436376] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:21:09.248 [2024-07-12 08:47:44.436851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:09.248 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.506 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:09.765 [ 00:21:09.765 { 00:21:09.765 "name": "BaseBdev3", 00:21:09.765 "aliases": [ 00:21:09.765 "90c59952-d175-4807-8116-3272ef123ac4" 00:21:09.765 ], 00:21:09.765 "product_name": "Malloc disk", 00:21:09.765 "block_size": 512, 00:21:09.765 "num_blocks": 65536, 00:21:09.765 "uuid": "90c59952-d175-4807-8116-3272ef123ac4", 00:21:09.765 "assigned_rate_limits": { 00:21:09.765 "rw_ios_per_sec": 0, 00:21:09.765 "rw_mbytes_per_sec": 0, 00:21:09.765 "r_mbytes_per_sec": 0, 00:21:09.765 "w_mbytes_per_sec": 0 00:21:09.765 }, 00:21:09.765 "claimed": true, 00:21:09.765 "claim_type": "exclusive_write", 00:21:09.765 "zoned": false, 00:21:09.765 "supported_io_types": { 00:21:09.765 "read": true, 00:21:09.765 "write": true, 00:21:09.765 "unmap": true, 00:21:09.765 "flush": true, 00:21:09.765 "reset": true, 00:21:09.765 "nvme_admin": false, 00:21:09.765 "nvme_io": false, 00:21:09.765 "nvme_io_md": false, 00:21:09.765 "write_zeroes": true, 00:21:09.765 "zcopy": true, 00:21:09.765 "get_zone_info": false, 00:21:09.765 "zone_management": false, 00:21:09.765 "zone_append": false, 00:21:09.765 "compare": false, 00:21:09.765 "compare_and_write": false, 00:21:09.765 "abort": true, 00:21:09.765 "seek_hole": false, 00:21:09.765 "seek_data": false, 00:21:09.765 "copy": true, 00:21:09.765 "nvme_iov_md": false 00:21:09.765 }, 00:21:09.765 "memory_domains": [ 00:21:09.765 { 00:21:09.765 "dma_device_id": "system", 00:21:09.765 "dma_device_type": 1 00:21:09.765 }, 00:21:09.765 { 00:21:09.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.765 "dma_device_type": 2 00:21:09.765 } 00:21:09.765 ], 00:21:09.765 "driver_specific": {} 00:21:09.765 } 00:21:09.765 ] 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.765 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.024 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.024 "name": "Existed_Raid", 00:21:10.024 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:10.024 "strip_size_kb": 64, 00:21:10.024 "state": "online", 00:21:10.024 "raid_level": "concat", 00:21:10.024 "superblock": true, 00:21:10.024 "num_base_bdevs": 3, 00:21:10.024 "num_base_bdevs_discovered": 3, 00:21:10.024 "num_base_bdevs_operational": 3, 00:21:10.024 "base_bdevs_list": [ 00:21:10.024 { 00:21:10.024 "name": "BaseBdev1", 00:21:10.024 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:10.024 "is_configured": true, 00:21:10.024 "data_offset": 2048, 00:21:10.024 "data_size": 63488 00:21:10.024 }, 00:21:10.024 { 00:21:10.024 "name": "BaseBdev2", 00:21:10.024 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:10.024 "is_configured": true, 00:21:10.024 "data_offset": 2048, 00:21:10.024 "data_size": 63488 00:21:10.024 }, 00:21:10.024 { 00:21:10.024 "name": "BaseBdev3", 00:21:10.024 "uuid": "90c59952-d175-4807-8116-3272ef123ac4", 00:21:10.024 "is_configured": true, 00:21:10.024 "data_offset": 2048, 00:21:10.024 "data_size": 63488 00:21:10.024 } 00:21:10.024 ] 00:21:10.024 }' 00:21:10.024 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.024 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 
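The per-bdev property verification that the trace below walks through amounts to a short rpc.py/jq loop: dump the raid volume, collect the configured base bdev names, then compare each base bdev's block size and metadata fields against the raid volume. A minimal sketch of the equivalent commands (assuming a running SPDK target serving RPCs on /var/tmp/spdk-raid.sock and a configured raid bdev named Existed_Raid, as in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump the raid volume and pick out the configured base bdev names.
    raid_json=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                             | select(.is_configured == true).name' <<< "$raid_json")

    for name in $base_bdev_names; do
        base_json=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
        # Each base bdev must match the raid volume's block size and, for plain
        # malloc disks, report no metadata/DIF configuration (jq prints "null").
        [[ $(jq .block_size <<< "$base_json") == $(jq .block_size <<< "$raid_json") ]]
        [[ $(jq .md_size <<< "$base_json") == null ]]
        [[ $(jq .md_interleave <<< "$base_json") == null ]]
        [[ $(jq .dif_type <<< "$base_json") == null ]]
    done
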
00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:10.960 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:10.960 [2024-07-12 08:47:46.137146] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.960 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:10.960 "name": "Existed_Raid", 00:21:10.960 "aliases": [ 00:21:10.960 "4cb6ce61-0c25-4f67-8acc-a00d29149be7" 00:21:10.960 ], 00:21:10.960 "product_name": "Raid Volume", 00:21:10.960 "block_size": 512, 00:21:10.960 "num_blocks": 190464, 00:21:10.960 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:10.960 "assigned_rate_limits": { 00:21:10.960 "rw_ios_per_sec": 0, 00:21:10.960 "rw_mbytes_per_sec": 0, 00:21:10.960 "r_mbytes_per_sec": 0, 00:21:10.960 "w_mbytes_per_sec": 0 00:21:10.960 }, 00:21:10.960 "claimed": false, 00:21:10.960 "zoned": false, 00:21:10.960 "supported_io_types": { 00:21:10.960 "read": true, 00:21:10.960 "write": true, 00:21:10.960 "unmap": true, 00:21:10.960 "flush": true, 00:21:10.960 "reset": true, 00:21:10.960 "nvme_admin": false, 00:21:10.960 "nvme_io": false, 00:21:10.960 "nvme_io_md": false, 00:21:10.960 "write_zeroes": true, 00:21:10.960 "zcopy": false, 00:21:10.960 "get_zone_info": false, 00:21:10.960 "zone_management": false, 00:21:10.960 "zone_append": false, 00:21:10.960 "compare": false, 00:21:10.960 "compare_and_write": false, 00:21:10.960 "abort": false, 00:21:10.960 "seek_hole": false, 00:21:10.960 "seek_data": false, 00:21:10.960 "copy": false, 00:21:10.960 "nvme_iov_md": false 00:21:10.960 }, 00:21:10.960 "memory_domains": [ 00:21:10.960 { 00:21:10.960 "dma_device_id": "system", 00:21:10.960 "dma_device_type": 1 00:21:10.960 }, 00:21:10.960 { 00:21:10.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.960 "dma_device_type": 2 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "dma_device_id": "system", 00:21:10.961 "dma_device_type": 1 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.961 "dma_device_type": 2 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "dma_device_id": "system", 00:21:10.961 "dma_device_type": 1 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.961 "dma_device_type": 2 00:21:10.961 } 00:21:10.961 ], 00:21:10.961 "driver_specific": { 00:21:10.961 "raid": { 00:21:10.961 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:10.961 "strip_size_kb": 64, 00:21:10.961 "state": "online", 00:21:10.961 "raid_level": "concat", 00:21:10.961 "superblock": true, 00:21:10.961 "num_base_bdevs": 3, 00:21:10.961 "num_base_bdevs_discovered": 3, 00:21:10.961 "num_base_bdevs_operational": 3, 00:21:10.961 "base_bdevs_list": [ 00:21:10.961 { 00:21:10.961 "name": "BaseBdev1", 00:21:10.961 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:10.961 "is_configured": true, 00:21:10.961 "data_offset": 2048, 00:21:10.961 "data_size": 63488 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "name": "BaseBdev2", 00:21:10.961 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:10.961 "is_configured": true, 00:21:10.961 "data_offset": 2048, 00:21:10.961 "data_size": 63488 00:21:10.961 }, 00:21:10.961 { 00:21:10.961 "name": "BaseBdev3", 00:21:10.961 "uuid": "90c59952-d175-4807-8116-3272ef123ac4", 00:21:10.961 "is_configured": true, 00:21:10.961 "data_offset": 2048, 
00:21:10.961 "data_size": 63488 00:21:10.961 } 00:21:10.961 ] 00:21:10.961 } 00:21:10.961 } 00:21:10.961 }' 00:21:11.219 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.219 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:11.219 BaseBdev2 00:21:11.219 BaseBdev3' 00:21:11.219 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.219 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:11.219 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.477 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.477 "name": "BaseBdev1", 00:21:11.477 "aliases": [ 00:21:11.477 "b00762da-0f77-4d9b-aa03-6efea5519ad4" 00:21:11.477 ], 00:21:11.477 "product_name": "Malloc disk", 00:21:11.477 "block_size": 512, 00:21:11.477 "num_blocks": 65536, 00:21:11.477 "uuid": "b00762da-0f77-4d9b-aa03-6efea5519ad4", 00:21:11.477 "assigned_rate_limits": { 00:21:11.477 "rw_ios_per_sec": 0, 00:21:11.477 "rw_mbytes_per_sec": 0, 00:21:11.477 "r_mbytes_per_sec": 0, 00:21:11.477 "w_mbytes_per_sec": 0 00:21:11.477 }, 00:21:11.477 "claimed": true, 00:21:11.477 "claim_type": "exclusive_write", 00:21:11.477 "zoned": false, 00:21:11.477 "supported_io_types": { 00:21:11.477 "read": true, 00:21:11.477 "write": true, 00:21:11.477 "unmap": true, 00:21:11.477 "flush": true, 00:21:11.477 "reset": true, 00:21:11.477 "nvme_admin": false, 00:21:11.477 "nvme_io": false, 00:21:11.477 "nvme_io_md": false, 00:21:11.477 "write_zeroes": true, 00:21:11.477 "zcopy": true, 00:21:11.477 "get_zone_info": false, 00:21:11.477 "zone_management": false, 00:21:11.477 "zone_append": false, 00:21:11.477 "compare": false, 00:21:11.477 "compare_and_write": false, 00:21:11.477 "abort": true, 00:21:11.477 "seek_hole": false, 00:21:11.477 "seek_data": false, 00:21:11.477 "copy": true, 00:21:11.477 "nvme_iov_md": false 00:21:11.477 }, 00:21:11.477 "memory_domains": [ 00:21:11.477 { 00:21:11.477 "dma_device_id": "system", 00:21:11.477 "dma_device_type": 1 00:21:11.477 }, 00:21:11.477 { 00:21:11.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.478 "dma_device_type": 2 00:21:11.478 } 00:21:11.478 ], 00:21:11.478 "driver_specific": {} 00:21:11.478 }' 00:21:11.478 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.478 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.478 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.478 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.478 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.736 
08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:11.736 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.995 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.995 "name": "BaseBdev2", 00:21:11.995 "aliases": [ 00:21:11.995 "e42c5763-9eb0-430c-8647-1943d1afe379" 00:21:11.995 ], 00:21:11.995 "product_name": "Malloc disk", 00:21:11.995 "block_size": 512, 00:21:11.995 "num_blocks": 65536, 00:21:11.995 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:11.995 "assigned_rate_limits": { 00:21:11.995 "rw_ios_per_sec": 0, 00:21:11.995 "rw_mbytes_per_sec": 0, 00:21:11.995 "r_mbytes_per_sec": 0, 00:21:11.995 "w_mbytes_per_sec": 0 00:21:11.995 }, 00:21:11.995 "claimed": true, 00:21:11.995 "claim_type": "exclusive_write", 00:21:11.995 "zoned": false, 00:21:11.995 "supported_io_types": { 00:21:11.995 "read": true, 00:21:11.995 "write": true, 00:21:11.995 "unmap": true, 00:21:11.995 "flush": true, 00:21:11.995 "reset": true, 00:21:11.995 "nvme_admin": false, 00:21:11.995 "nvme_io": false, 00:21:11.995 "nvme_io_md": false, 00:21:11.995 "write_zeroes": true, 00:21:11.995 "zcopy": true, 00:21:11.995 "get_zone_info": false, 00:21:11.995 "zone_management": false, 00:21:11.995 "zone_append": false, 00:21:11.995 "compare": false, 00:21:11.995 "compare_and_write": false, 00:21:11.995 "abort": true, 00:21:11.995 "seek_hole": false, 00:21:11.995 "seek_data": false, 00:21:11.995 "copy": true, 00:21:11.995 "nvme_iov_md": false 00:21:11.995 }, 00:21:11.995 "memory_domains": [ 00:21:11.995 { 00:21:11.995 "dma_device_id": "system", 00:21:11.995 "dma_device_type": 1 00:21:11.995 }, 00:21:11.995 { 00:21:11.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.995 "dma_device_type": 2 00:21:11.995 } 00:21:11.995 ], 00:21:11.995 "driver_specific": {} 00:21:11.995 }' 00:21:11.995 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.255 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:12.512 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.770 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.770 "name": "BaseBdev3", 00:21:12.770 "aliases": [ 00:21:12.770 "90c59952-d175-4807-8116-3272ef123ac4" 00:21:12.770 ], 00:21:12.770 "product_name": "Malloc disk", 00:21:12.770 "block_size": 512, 00:21:12.770 "num_blocks": 65536, 00:21:12.770 "uuid": "90c59952-d175-4807-8116-3272ef123ac4", 00:21:12.770 "assigned_rate_limits": { 00:21:12.770 "rw_ios_per_sec": 0, 00:21:12.770 "rw_mbytes_per_sec": 0, 00:21:12.770 "r_mbytes_per_sec": 0, 00:21:12.770 "w_mbytes_per_sec": 0 00:21:12.770 }, 00:21:12.770 "claimed": true, 00:21:12.770 "claim_type": "exclusive_write", 00:21:12.770 "zoned": false, 00:21:12.770 "supported_io_types": { 00:21:12.770 "read": true, 00:21:12.770 "write": true, 00:21:12.770 "unmap": true, 00:21:12.770 "flush": true, 00:21:12.770 "reset": true, 00:21:12.770 "nvme_admin": false, 00:21:12.770 "nvme_io": false, 00:21:12.770 "nvme_io_md": false, 00:21:12.770 "write_zeroes": true, 00:21:12.770 "zcopy": true, 00:21:12.770 "get_zone_info": false, 00:21:12.770 "zone_management": false, 00:21:12.770 "zone_append": false, 00:21:12.770 "compare": false, 00:21:12.770 "compare_and_write": false, 00:21:12.770 "abort": true, 00:21:12.770 "seek_hole": false, 00:21:12.770 "seek_data": false, 00:21:12.770 "copy": true, 00:21:12.770 "nvme_iov_md": false 00:21:12.770 }, 00:21:12.770 "memory_domains": [ 00:21:12.770 { 00:21:12.770 "dma_device_id": "system", 00:21:12.770 "dma_device_type": 1 00:21:12.770 }, 00:21:12.770 { 00:21:12.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.770 "dma_device_type": 2 00:21:12.770 } 00:21:12.770 ], 00:21:12.770 "driver_specific": {} 00:21:12.770 }' 00:21:12.770 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.028 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.028 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.287 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.287 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.287 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.287 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
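What follows in the trace is the failure-path check: BaseBdev1 is deleted out from under the array, and because concat carries no redundancy the volume is expected to drop from online to offline with two of the three base bdevs still discovered. That expectation is read back through bdev_raid_get_bdevs; a minimal sketch of the equivalent state query (same rpc.py/socket assumptions as the sketch above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Pull the raid bdev's state record and compare the fields the test cares about.
    info=$($rpc -s $sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(jq -r .state <<< "$info")
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
    operational=$(jq -r .num_base_bdevs_operational <<< "$info")

    # Expected after "bdev_malloc_delete BaseBdev1" on a 3-disk concat volume:
    [[ $state == offline && $discovered == 2 && $operational == 2 ]]
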
00:21:13.287 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:13.545 [2024-07-12 08:47:48.597623] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.545 [2024-07-12 08:47:48.597702] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.545 [2024-07-12 08:47:48.597796] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.545 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.111 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.111 "name": "Existed_Raid", 00:21:14.111 "uuid": "4cb6ce61-0c25-4f67-8acc-a00d29149be7", 00:21:14.111 "strip_size_kb": 64, 00:21:14.111 "state": "offline", 00:21:14.111 "raid_level": "concat", 00:21:14.111 "superblock": true, 00:21:14.111 "num_base_bdevs": 3, 00:21:14.111 "num_base_bdevs_discovered": 2, 00:21:14.111 "num_base_bdevs_operational": 2, 00:21:14.111 "base_bdevs_list": [ 00:21:14.111 { 00:21:14.111 "name": null, 00:21:14.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.111 "is_configured": false, 00:21:14.111 "data_offset": 2048, 00:21:14.111 "data_size": 63488 00:21:14.111 }, 00:21:14.111 { 00:21:14.111 "name": "BaseBdev2", 00:21:14.111 "uuid": "e42c5763-9eb0-430c-8647-1943d1afe379", 00:21:14.111 "is_configured": true, 00:21:14.111 "data_offset": 2048, 00:21:14.111 "data_size": 63488 00:21:14.111 }, 00:21:14.111 { 
00:21:14.111 "name": "BaseBdev3", 00:21:14.111 "uuid": "90c59952-d175-4807-8116-3272ef123ac4", 00:21:14.111 "is_configured": true, 00:21:14.111 "data_offset": 2048, 00:21:14.111 "data_size": 63488 00:21:14.111 } 00:21:14.111 ] 00:21:14.111 }' 00:21:14.111 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.111 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.677 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:14.677 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:14.677 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.677 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:14.936 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:14.936 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:14.936 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:15.195 [2024-07-12 08:47:50.325389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:15.453 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:15.453 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:15.453 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.453 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:15.712 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:15.712 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:15.712 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:15.971 [2024-07-12 08:47:50.911126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:15.971 [2024-07-12 08:47:50.911214] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:21:15.971 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:15.971 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:15.971 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.971 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:16.229 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:16.229 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:16.229 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:16.229 
08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:16.229 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:16.229 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:16.499 BaseBdev2 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:16.499 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:16.758 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:17.016 [ 00:21:17.016 { 00:21:17.017 "name": "BaseBdev2", 00:21:17.017 "aliases": [ 00:21:17.017 "ae900953-0d00-41db-93c6-4bb7b82ea87f" 00:21:17.017 ], 00:21:17.017 "product_name": "Malloc disk", 00:21:17.017 "block_size": 512, 00:21:17.017 "num_blocks": 65536, 00:21:17.017 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:17.017 "assigned_rate_limits": { 00:21:17.017 "rw_ios_per_sec": 0, 00:21:17.017 "rw_mbytes_per_sec": 0, 00:21:17.017 "r_mbytes_per_sec": 0, 00:21:17.017 "w_mbytes_per_sec": 0 00:21:17.017 }, 00:21:17.017 "claimed": false, 00:21:17.017 "zoned": false, 00:21:17.017 "supported_io_types": { 00:21:17.017 "read": true, 00:21:17.017 "write": true, 00:21:17.017 "unmap": true, 00:21:17.017 "flush": true, 00:21:17.017 "reset": true, 00:21:17.017 "nvme_admin": false, 00:21:17.017 "nvme_io": false, 00:21:17.017 "nvme_io_md": false, 00:21:17.017 "write_zeroes": true, 00:21:17.017 "zcopy": true, 00:21:17.017 "get_zone_info": false, 00:21:17.017 "zone_management": false, 00:21:17.017 "zone_append": false, 00:21:17.017 "compare": false, 00:21:17.017 "compare_and_write": false, 00:21:17.017 "abort": true, 00:21:17.017 "seek_hole": false, 00:21:17.017 "seek_data": false, 00:21:17.017 "copy": true, 00:21:17.017 "nvme_iov_md": false 00:21:17.017 }, 00:21:17.017 "memory_domains": [ 00:21:17.017 { 00:21:17.017 "dma_device_id": "system", 00:21:17.017 "dma_device_type": 1 00:21:17.017 }, 00:21:17.017 { 00:21:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.017 "dma_device_type": 2 00:21:17.017 } 00:21:17.017 ], 00:21:17.017 "driver_specific": {} 00:21:17.017 } 00:21:17.017 ] 00:21:17.017 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:17.017 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:17.017 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:17.017 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:17.327 BaseBdev3 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:17.327 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:17.601 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:17.859 [ 00:21:17.859 { 00:21:17.859 "name": "BaseBdev3", 00:21:17.859 "aliases": [ 00:21:17.859 "7c4d197b-2a56-4a26-9524-3a8bb092c26c" 00:21:17.859 ], 00:21:17.859 "product_name": "Malloc disk", 00:21:17.859 "block_size": 512, 00:21:17.859 "num_blocks": 65536, 00:21:17.859 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:17.859 "assigned_rate_limits": { 00:21:17.859 "rw_ios_per_sec": 0, 00:21:17.859 "rw_mbytes_per_sec": 0, 00:21:17.859 "r_mbytes_per_sec": 0, 00:21:17.859 "w_mbytes_per_sec": 0 00:21:17.859 }, 00:21:17.859 "claimed": false, 00:21:17.859 "zoned": false, 00:21:17.859 "supported_io_types": { 00:21:17.859 "read": true, 00:21:17.859 "write": true, 00:21:17.859 "unmap": true, 00:21:17.859 "flush": true, 00:21:17.859 "reset": true, 00:21:17.859 "nvme_admin": false, 00:21:17.859 "nvme_io": false, 00:21:17.859 "nvme_io_md": false, 00:21:17.859 "write_zeroes": true, 00:21:17.859 "zcopy": true, 00:21:17.859 "get_zone_info": false, 00:21:17.859 "zone_management": false, 00:21:17.859 "zone_append": false, 00:21:17.859 "compare": false, 00:21:17.860 "compare_and_write": false, 00:21:17.860 "abort": true, 00:21:17.860 "seek_hole": false, 00:21:17.860 "seek_data": false, 00:21:17.860 "copy": true, 00:21:17.860 "nvme_iov_md": false 00:21:17.860 }, 00:21:17.860 "memory_domains": [ 00:21:17.860 { 00:21:17.860 "dma_device_id": "system", 00:21:17.860 "dma_device_type": 1 00:21:17.860 }, 00:21:17.860 { 00:21:17.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.860 "dma_device_type": 2 00:21:17.860 } 00:21:17.860 ], 00:21:17.860 "driver_specific": {} 00:21:17.860 } 00:21:17.860 ] 00:21:17.860 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:17.860 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:17.860 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:17.860 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:18.118 [2024-07-12 08:47:53.147003] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:18.118 
[2024-07-12 08:47:53.147721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:18.118 [2024-07-12 08:47:53.147810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.118 [2024-07-12 08:47:53.150206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.118 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.375 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.375 "name": "Existed_Raid", 00:21:18.375 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:18.375 "strip_size_kb": 64, 00:21:18.375 "state": "configuring", 00:21:18.375 "raid_level": "concat", 00:21:18.375 "superblock": true, 00:21:18.376 "num_base_bdevs": 3, 00:21:18.376 "num_base_bdevs_discovered": 2, 00:21:18.376 "num_base_bdevs_operational": 3, 00:21:18.376 "base_bdevs_list": [ 00:21:18.376 { 00:21:18.376 "name": "BaseBdev1", 00:21:18.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.376 "is_configured": false, 00:21:18.376 "data_offset": 0, 00:21:18.376 "data_size": 0 00:21:18.376 }, 00:21:18.376 { 00:21:18.376 "name": "BaseBdev2", 00:21:18.376 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:18.376 "is_configured": true, 00:21:18.376 "data_offset": 2048, 00:21:18.376 "data_size": 63488 00:21:18.376 }, 00:21:18.376 { 00:21:18.376 "name": "BaseBdev3", 00:21:18.376 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:18.376 "is_configured": true, 00:21:18.376 "data_offset": 2048, 00:21:18.376 "data_size": 63488 00:21:18.376 } 00:21:18.376 ] 00:21:18.376 }' 00:21:18.376 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.376 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.941 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:19.199 [2024-07-12 08:47:54.327182] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.199 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.457 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.457 "name": "Existed_Raid", 00:21:19.457 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:19.457 "strip_size_kb": 64, 00:21:19.457 "state": "configuring", 00:21:19.457 "raid_level": "concat", 00:21:19.457 "superblock": true, 00:21:19.457 "num_base_bdevs": 3, 00:21:19.457 "num_base_bdevs_discovered": 1, 00:21:19.457 "num_base_bdevs_operational": 3, 00:21:19.457 "base_bdevs_list": [ 00:21:19.457 { 00:21:19.457 "name": "BaseBdev1", 00:21:19.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.457 "is_configured": false, 00:21:19.457 "data_offset": 0, 00:21:19.457 "data_size": 0 00:21:19.457 }, 00:21:19.457 { 00:21:19.457 "name": null, 00:21:19.457 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:19.457 "is_configured": false, 00:21:19.457 "data_offset": 2048, 00:21:19.457 "data_size": 63488 00:21:19.457 }, 00:21:19.457 { 00:21:19.457 "name": "BaseBdev3", 00:21:19.458 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:19.458 "is_configured": true, 00:21:19.458 "data_offset": 2048, 00:21:19.458 "data_size": 63488 00:21:19.458 } 00:21:19.458 ] 00:21:19.458 }' 00:21:19.458 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.458 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.417 08:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.417 08:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:20.675 08:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:20.675 08:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.933 [2024-07-12 08:47:55.878829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.933 BaseBdev1 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:20.933 08:47:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:21.191 08:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:21.449 [ 00:21:21.449 { 00:21:21.449 "name": "BaseBdev1", 00:21:21.449 "aliases": [ 00:21:21.449 "b31bd15b-6f8f-4a1d-909a-a506ebf13de0" 00:21:21.449 ], 00:21:21.449 "product_name": "Malloc disk", 00:21:21.449 "block_size": 512, 00:21:21.449 "num_blocks": 65536, 00:21:21.449 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:21.449 "assigned_rate_limits": { 00:21:21.449 "rw_ios_per_sec": 0, 00:21:21.449 "rw_mbytes_per_sec": 0, 00:21:21.449 "r_mbytes_per_sec": 0, 00:21:21.449 "w_mbytes_per_sec": 0 00:21:21.449 }, 00:21:21.449 "claimed": true, 00:21:21.449 "claim_type": "exclusive_write", 00:21:21.449 "zoned": false, 00:21:21.449 "supported_io_types": { 00:21:21.449 "read": true, 00:21:21.449 "write": true, 00:21:21.449 "unmap": true, 00:21:21.449 "flush": true, 00:21:21.449 "reset": true, 00:21:21.449 "nvme_admin": false, 00:21:21.449 "nvme_io": false, 00:21:21.449 "nvme_io_md": false, 00:21:21.449 "write_zeroes": true, 00:21:21.449 "zcopy": true, 00:21:21.449 "get_zone_info": false, 00:21:21.449 "zone_management": false, 00:21:21.449 "zone_append": false, 00:21:21.449 "compare": false, 00:21:21.449 "compare_and_write": false, 00:21:21.449 "abort": true, 00:21:21.449 "seek_hole": false, 00:21:21.449 "seek_data": false, 00:21:21.449 "copy": true, 00:21:21.449 "nvme_iov_md": false 00:21:21.449 }, 00:21:21.449 "memory_domains": [ 00:21:21.449 { 00:21:21.449 "dma_device_id": "system", 00:21:21.449 "dma_device_type": 1 00:21:21.449 }, 00:21:21.449 { 00:21:21.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.449 "dma_device_type": 2 00:21:21.449 } 00:21:21.449 ], 00:21:21.449 "driver_specific": {} 00:21:21.449 } 00:21:21.449 ] 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.449 08:47:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.449 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.707 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.707 "name": "Existed_Raid", 00:21:21.707 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:21.707 "strip_size_kb": 64, 00:21:21.707 "state": "configuring", 00:21:21.707 "raid_level": "concat", 00:21:21.707 "superblock": true, 00:21:21.707 "num_base_bdevs": 3, 00:21:21.707 "num_base_bdevs_discovered": 2, 00:21:21.707 "num_base_bdevs_operational": 3, 00:21:21.707 "base_bdevs_list": [ 00:21:21.707 { 00:21:21.707 "name": "BaseBdev1", 00:21:21.707 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:21.707 "is_configured": true, 00:21:21.707 "data_offset": 2048, 00:21:21.707 "data_size": 63488 00:21:21.707 }, 00:21:21.707 { 00:21:21.707 "name": null, 00:21:21.707 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:21.707 "is_configured": false, 00:21:21.707 "data_offset": 2048, 00:21:21.707 "data_size": 63488 00:21:21.707 }, 00:21:21.707 { 00:21:21.707 "name": "BaseBdev3", 00:21:21.707 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:21.707 "is_configured": true, 00:21:21.707 "data_offset": 2048, 00:21:21.707 "data_size": 63488 00:21:21.707 } 00:21:21.707 ] 00:21:21.707 }' 00:21:21.707 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.707 08:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.273 08:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:22.273 08:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.837 08:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:22.837 08:47:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:23.096 [2024-07-12 08:47:58.047550] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.096 08:47:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.096 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.354 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.354 "name": "Existed_Raid", 00:21:23.354 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:23.354 "strip_size_kb": 64, 00:21:23.354 "state": "configuring", 00:21:23.354 "raid_level": "concat", 00:21:23.354 "superblock": true, 00:21:23.354 "num_base_bdevs": 3, 00:21:23.354 "num_base_bdevs_discovered": 1, 00:21:23.354 "num_base_bdevs_operational": 3, 00:21:23.354 "base_bdevs_list": [ 00:21:23.354 { 00:21:23.354 "name": "BaseBdev1", 00:21:23.354 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:23.354 "is_configured": true, 00:21:23.354 "data_offset": 2048, 00:21:23.354 "data_size": 63488 00:21:23.354 }, 00:21:23.354 { 00:21:23.354 "name": null, 00:21:23.354 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:23.354 "is_configured": false, 00:21:23.354 "data_offset": 2048, 00:21:23.354 "data_size": 63488 00:21:23.354 }, 00:21:23.354 { 00:21:23.354 "name": null, 00:21:23.354 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:23.354 "is_configured": false, 00:21:23.354 "data_offset": 2048, 00:21:23.354 "data_size": 63488 00:21:23.354 } 00:21:23.354 ] 00:21:23.354 }' 00:21:23.354 08:47:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.354 08:47:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.920 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.920 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:24.179 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:24.179 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:24.437 [2024-07-12 08:47:59.628053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.696 "name": "Existed_Raid", 00:21:24.696 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:24.696 "strip_size_kb": 64, 00:21:24.696 "state": "configuring", 00:21:24.696 "raid_level": "concat", 00:21:24.696 "superblock": true, 00:21:24.696 "num_base_bdevs": 3, 00:21:24.696 "num_base_bdevs_discovered": 2, 00:21:24.696 "num_base_bdevs_operational": 3, 00:21:24.696 "base_bdevs_list": [ 00:21:24.696 { 00:21:24.696 "name": "BaseBdev1", 00:21:24.696 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:24.696 "is_configured": true, 00:21:24.696 "data_offset": 2048, 00:21:24.696 "data_size": 63488 00:21:24.696 }, 00:21:24.696 { 00:21:24.696 "name": null, 00:21:24.696 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:24.696 "is_configured": false, 00:21:24.696 "data_offset": 2048, 00:21:24.696 "data_size": 63488 00:21:24.696 }, 00:21:24.696 { 00:21:24.696 "name": "BaseBdev3", 00:21:24.696 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:24.696 "is_configured": true, 00:21:24.696 "data_offset": 2048, 00:21:24.696 "data_size": 63488 00:21:24.696 } 00:21:24.696 ] 00:21:24.696 }' 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.696 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.630 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.630 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:25.888 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:25.888 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:26.147 [2024-07-12 08:48:01.180453] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.147 08:48:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.147 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.405 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.405 "name": "Existed_Raid", 00:21:26.405 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:26.405 "strip_size_kb": 64, 00:21:26.405 "state": "configuring", 00:21:26.405 "raid_level": "concat", 00:21:26.405 "superblock": true, 00:21:26.405 "num_base_bdevs": 3, 00:21:26.405 "num_base_bdevs_discovered": 1, 00:21:26.405 "num_base_bdevs_operational": 3, 00:21:26.405 "base_bdevs_list": [ 00:21:26.405 { 00:21:26.405 "name": null, 00:21:26.405 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:26.405 "is_configured": false, 00:21:26.405 "data_offset": 2048, 00:21:26.405 "data_size": 63488 00:21:26.405 }, 00:21:26.405 { 00:21:26.405 "name": null, 00:21:26.405 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:26.405 "is_configured": false, 00:21:26.405 "data_offset": 2048, 00:21:26.405 "data_size": 63488 00:21:26.405 }, 00:21:26.405 { 00:21:26.405 "name": "BaseBdev3", 00:21:26.405 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:26.405 "is_configured": true, 00:21:26.405 "data_offset": 2048, 00:21:26.405 "data_size": 63488 00:21:26.405 } 00:21:26.405 ] 00:21:26.405 }' 00:21:26.405 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.405 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.339 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.339 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:27.596 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:27.596 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:21:27.596 [2024-07-12 08:48:02.776137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.853 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:27.853 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.853 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.853 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:27.853 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.854 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.111 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.111 "name": "Existed_Raid", 00:21:28.111 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:28.111 "strip_size_kb": 64, 00:21:28.111 "state": "configuring", 00:21:28.111 "raid_level": "concat", 00:21:28.111 "superblock": true, 00:21:28.111 "num_base_bdevs": 3, 00:21:28.111 "num_base_bdevs_discovered": 2, 00:21:28.111 "num_base_bdevs_operational": 3, 00:21:28.111 "base_bdevs_list": [ 00:21:28.111 { 00:21:28.111 "name": null, 00:21:28.111 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:28.111 "is_configured": false, 00:21:28.111 "data_offset": 2048, 00:21:28.111 "data_size": 63488 00:21:28.111 }, 00:21:28.111 { 00:21:28.111 "name": "BaseBdev2", 00:21:28.111 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:28.111 "is_configured": true, 00:21:28.111 "data_offset": 2048, 00:21:28.111 "data_size": 63488 00:21:28.111 }, 00:21:28.111 { 00:21:28.111 "name": "BaseBdev3", 00:21:28.111 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:28.111 "is_configured": true, 00:21:28.111 "data_offset": 2048, 00:21:28.111 "data_size": 63488 00:21:28.111 } 00:21:28.111 ] 00:21:28.111 }' 00:21:28.111 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.111 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.676 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.676 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:28.936 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:28.936 08:48:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.936 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:29.196 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b31bd15b-6f8f-4a1d-909a-a506ebf13de0 00:21:29.454 [2024-07-12 08:48:04.623032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:29.454 [2024-07-12 08:48:04.623301] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:29.454 [2024-07-12 08:48:04.623333] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:29.454 [2024-07-12 08:48:04.623472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:29.454 NewBaseBdev 00:21:29.454 [2024-07-12 08:48:04.623867] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:29.454 [2024-07-12 08:48:04.623896] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:21:29.454 [2024-07-12 08:48:04.624042] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:29.454 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.712 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:29.971 [ 00:21:29.971 { 00:21:29.971 "name": "NewBaseBdev", 00:21:29.971 "aliases": [ 00:21:29.971 "b31bd15b-6f8f-4a1d-909a-a506ebf13de0" 00:21:29.971 ], 00:21:29.971 "product_name": "Malloc disk", 00:21:29.971 "block_size": 512, 00:21:29.971 "num_blocks": 65536, 00:21:29.971 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:29.971 "assigned_rate_limits": { 00:21:29.971 "rw_ios_per_sec": 0, 00:21:29.971 "rw_mbytes_per_sec": 0, 00:21:29.971 "r_mbytes_per_sec": 0, 00:21:29.971 "w_mbytes_per_sec": 0 00:21:29.971 }, 00:21:29.971 "claimed": true, 00:21:29.971 "claim_type": "exclusive_write", 00:21:29.971 "zoned": false, 00:21:29.971 "supported_io_types": { 00:21:29.971 "read": true, 00:21:29.971 "write": true, 00:21:29.971 "unmap": true, 00:21:29.971 "flush": true, 00:21:29.971 "reset": true, 00:21:29.971 "nvme_admin": false, 00:21:29.971 "nvme_io": false, 00:21:29.971 "nvme_io_md": false, 00:21:29.971 "write_zeroes": true, 00:21:29.971 "zcopy": true, 00:21:29.971 "get_zone_info": false, 00:21:29.971 "zone_management": 
false, 00:21:29.971 "zone_append": false, 00:21:29.971 "compare": false, 00:21:29.971 "compare_and_write": false, 00:21:29.971 "abort": true, 00:21:29.971 "seek_hole": false, 00:21:29.971 "seek_data": false, 00:21:29.971 "copy": true, 00:21:29.971 "nvme_iov_md": false 00:21:29.971 }, 00:21:29.971 "memory_domains": [ 00:21:29.971 { 00:21:29.971 "dma_device_id": "system", 00:21:29.971 "dma_device_type": 1 00:21:29.971 }, 00:21:29.971 { 00:21:29.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.971 "dma_device_type": 2 00:21:29.971 } 00:21:29.971 ], 00:21:29.971 "driver_specific": {} 00:21:29.971 } 00:21:29.971 ] 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.971 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.229 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.229 "name": "Existed_Raid", 00:21:30.229 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:30.229 "strip_size_kb": 64, 00:21:30.229 "state": "online", 00:21:30.229 "raid_level": "concat", 00:21:30.229 "superblock": true, 00:21:30.229 "num_base_bdevs": 3, 00:21:30.229 "num_base_bdevs_discovered": 3, 00:21:30.229 "num_base_bdevs_operational": 3, 00:21:30.229 "base_bdevs_list": [ 00:21:30.229 { 00:21:30.229 "name": "NewBaseBdev", 00:21:30.229 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:30.229 "is_configured": true, 00:21:30.229 "data_offset": 2048, 00:21:30.229 "data_size": 63488 00:21:30.229 }, 00:21:30.229 { 00:21:30.229 "name": "BaseBdev2", 00:21:30.229 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:30.229 "is_configured": true, 00:21:30.229 "data_offset": 2048, 00:21:30.229 "data_size": 63488 00:21:30.229 }, 00:21:30.229 { 00:21:30.229 "name": "BaseBdev3", 00:21:30.229 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:30.229 "is_configured": true, 00:21:30.229 "data_offset": 2048, 00:21:30.229 "data_size": 63488 00:21:30.229 } 00:21:30.229 ] 00:21:30.229 }' 00:21:30.229 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:21:30.229 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:31.164 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:31.423 [2024-07-12 08:48:06.364090] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.423 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:31.423 "name": "Existed_Raid", 00:21:31.423 "aliases": [ 00:21:31.423 "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65" 00:21:31.423 ], 00:21:31.423 "product_name": "Raid Volume", 00:21:31.423 "block_size": 512, 00:21:31.423 "num_blocks": 190464, 00:21:31.423 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:31.423 "assigned_rate_limits": { 00:21:31.423 "rw_ios_per_sec": 0, 00:21:31.423 "rw_mbytes_per_sec": 0, 00:21:31.423 "r_mbytes_per_sec": 0, 00:21:31.423 "w_mbytes_per_sec": 0 00:21:31.423 }, 00:21:31.423 "claimed": false, 00:21:31.423 "zoned": false, 00:21:31.423 "supported_io_types": { 00:21:31.423 "read": true, 00:21:31.423 "write": true, 00:21:31.423 "unmap": true, 00:21:31.423 "flush": true, 00:21:31.423 "reset": true, 00:21:31.423 "nvme_admin": false, 00:21:31.423 "nvme_io": false, 00:21:31.423 "nvme_io_md": false, 00:21:31.423 "write_zeroes": true, 00:21:31.423 "zcopy": false, 00:21:31.423 "get_zone_info": false, 00:21:31.423 "zone_management": false, 00:21:31.423 "zone_append": false, 00:21:31.423 "compare": false, 00:21:31.423 "compare_and_write": false, 00:21:31.423 "abort": false, 00:21:31.423 "seek_hole": false, 00:21:31.423 "seek_data": false, 00:21:31.423 "copy": false, 00:21:31.423 "nvme_iov_md": false 00:21:31.423 }, 00:21:31.423 "memory_domains": [ 00:21:31.423 { 00:21:31.423 "dma_device_id": "system", 00:21:31.423 "dma_device_type": 1 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.424 "dma_device_type": 2 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "dma_device_id": "system", 00:21:31.424 "dma_device_type": 1 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.424 "dma_device_type": 2 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "dma_device_id": "system", 00:21:31.424 "dma_device_type": 1 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.424 "dma_device_type": 2 00:21:31.424 } 00:21:31.424 ], 00:21:31.424 "driver_specific": { 00:21:31.424 "raid": { 00:21:31.424 "uuid": "10c0b4e2-8b83-4f8c-ba4a-368fecdcde65", 00:21:31.424 "strip_size_kb": 64, 00:21:31.424 "state": "online", 00:21:31.424 "raid_level": "concat", 00:21:31.424 "superblock": true, 
00:21:31.424 "num_base_bdevs": 3, 00:21:31.424 "num_base_bdevs_discovered": 3, 00:21:31.424 "num_base_bdevs_operational": 3, 00:21:31.424 "base_bdevs_list": [ 00:21:31.424 { 00:21:31.424 "name": "NewBaseBdev", 00:21:31.424 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:31.424 "is_configured": true, 00:21:31.424 "data_offset": 2048, 00:21:31.424 "data_size": 63488 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "name": "BaseBdev2", 00:21:31.424 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:31.424 "is_configured": true, 00:21:31.424 "data_offset": 2048, 00:21:31.424 "data_size": 63488 00:21:31.424 }, 00:21:31.424 { 00:21:31.424 "name": "BaseBdev3", 00:21:31.424 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:31.424 "is_configured": true, 00:21:31.424 "data_offset": 2048, 00:21:31.424 "data_size": 63488 00:21:31.424 } 00:21:31.424 ] 00:21:31.424 } 00:21:31.424 } 00:21:31.424 }' 00:21:31.424 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.424 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:31.424 BaseBdev2 00:21:31.424 BaseBdev3' 00:21:31.424 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.424 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:31.424 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:31.682 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:31.682 "name": "NewBaseBdev", 00:21:31.682 "aliases": [ 00:21:31.682 "b31bd15b-6f8f-4a1d-909a-a506ebf13de0" 00:21:31.682 ], 00:21:31.682 "product_name": "Malloc disk", 00:21:31.682 "block_size": 512, 00:21:31.682 "num_blocks": 65536, 00:21:31.682 "uuid": "b31bd15b-6f8f-4a1d-909a-a506ebf13de0", 00:21:31.682 "assigned_rate_limits": { 00:21:31.682 "rw_ios_per_sec": 0, 00:21:31.682 "rw_mbytes_per_sec": 0, 00:21:31.682 "r_mbytes_per_sec": 0, 00:21:31.682 "w_mbytes_per_sec": 0 00:21:31.682 }, 00:21:31.682 "claimed": true, 00:21:31.682 "claim_type": "exclusive_write", 00:21:31.682 "zoned": false, 00:21:31.682 "supported_io_types": { 00:21:31.682 "read": true, 00:21:31.682 "write": true, 00:21:31.682 "unmap": true, 00:21:31.682 "flush": true, 00:21:31.682 "reset": true, 00:21:31.682 "nvme_admin": false, 00:21:31.682 "nvme_io": false, 00:21:31.682 "nvme_io_md": false, 00:21:31.682 "write_zeroes": true, 00:21:31.682 "zcopy": true, 00:21:31.682 "get_zone_info": false, 00:21:31.682 "zone_management": false, 00:21:31.682 "zone_append": false, 00:21:31.682 "compare": false, 00:21:31.682 "compare_and_write": false, 00:21:31.682 "abort": true, 00:21:31.682 "seek_hole": false, 00:21:31.683 "seek_data": false, 00:21:31.683 "copy": true, 00:21:31.683 "nvme_iov_md": false 00:21:31.683 }, 00:21:31.683 "memory_domains": [ 00:21:31.683 { 00:21:31.683 "dma_device_id": "system", 00:21:31.683 "dma_device_type": 1 00:21:31.683 }, 00:21:31.683 { 00:21:31.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.683 "dma_device_type": 2 00:21:31.683 } 00:21:31.683 ], 00:21:31.683 "driver_specific": {} 00:21:31.683 }' 00:21:31.683 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.683 08:48:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:31.683 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:31.683 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.941 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:31.941 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:31.941 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.941 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:31.941 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:31.941 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:31.941 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.200 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.200 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.200 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:32.200 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.458 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.458 "name": "BaseBdev2", 00:21:32.458 "aliases": [ 00:21:32.458 "ae900953-0d00-41db-93c6-4bb7b82ea87f" 00:21:32.458 ], 00:21:32.458 "product_name": "Malloc disk", 00:21:32.458 "block_size": 512, 00:21:32.458 "num_blocks": 65536, 00:21:32.458 "uuid": "ae900953-0d00-41db-93c6-4bb7b82ea87f", 00:21:32.458 "assigned_rate_limits": { 00:21:32.458 "rw_ios_per_sec": 0, 00:21:32.458 "rw_mbytes_per_sec": 0, 00:21:32.459 "r_mbytes_per_sec": 0, 00:21:32.459 "w_mbytes_per_sec": 0 00:21:32.459 }, 00:21:32.459 "claimed": true, 00:21:32.459 "claim_type": "exclusive_write", 00:21:32.459 "zoned": false, 00:21:32.459 "supported_io_types": { 00:21:32.459 "read": true, 00:21:32.459 "write": true, 00:21:32.459 "unmap": true, 00:21:32.459 "flush": true, 00:21:32.459 "reset": true, 00:21:32.459 "nvme_admin": false, 00:21:32.459 "nvme_io": false, 00:21:32.459 "nvme_io_md": false, 00:21:32.459 "write_zeroes": true, 00:21:32.459 "zcopy": true, 00:21:32.459 "get_zone_info": false, 00:21:32.459 "zone_management": false, 00:21:32.459 "zone_append": false, 00:21:32.459 "compare": false, 00:21:32.459 "compare_and_write": false, 00:21:32.459 "abort": true, 00:21:32.459 "seek_hole": false, 00:21:32.459 "seek_data": false, 00:21:32.459 "copy": true, 00:21:32.459 "nvme_iov_md": false 00:21:32.459 }, 00:21:32.459 "memory_domains": [ 00:21:32.459 { 00:21:32.459 "dma_device_id": "system", 00:21:32.459 "dma_device_type": 1 00:21:32.459 }, 00:21:32.459 { 00:21:32.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.459 "dma_device_type": 2 00:21:32.459 } 00:21:32.459 ], 00:21:32.459 "driver_specific": {} 00:21:32.459 }' 00:21:32.459 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.459 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.459 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:21:32.459 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.718 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.976 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.976 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.976 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.976 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:32.976 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.235 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.235 "name": "BaseBdev3", 00:21:33.235 "aliases": [ 00:21:33.235 "7c4d197b-2a56-4a26-9524-3a8bb092c26c" 00:21:33.235 ], 00:21:33.235 "product_name": "Malloc disk", 00:21:33.235 "block_size": 512, 00:21:33.235 "num_blocks": 65536, 00:21:33.235 "uuid": "7c4d197b-2a56-4a26-9524-3a8bb092c26c", 00:21:33.235 "assigned_rate_limits": { 00:21:33.235 "rw_ios_per_sec": 0, 00:21:33.235 "rw_mbytes_per_sec": 0, 00:21:33.235 "r_mbytes_per_sec": 0, 00:21:33.235 "w_mbytes_per_sec": 0 00:21:33.235 }, 00:21:33.235 "claimed": true, 00:21:33.235 "claim_type": "exclusive_write", 00:21:33.235 "zoned": false, 00:21:33.235 "supported_io_types": { 00:21:33.235 "read": true, 00:21:33.235 "write": true, 00:21:33.235 "unmap": true, 00:21:33.235 "flush": true, 00:21:33.235 "reset": true, 00:21:33.235 "nvme_admin": false, 00:21:33.235 "nvme_io": false, 00:21:33.235 "nvme_io_md": false, 00:21:33.235 "write_zeroes": true, 00:21:33.235 "zcopy": true, 00:21:33.235 "get_zone_info": false, 00:21:33.235 "zone_management": false, 00:21:33.235 "zone_append": false, 00:21:33.235 "compare": false, 00:21:33.235 "compare_and_write": false, 00:21:33.235 "abort": true, 00:21:33.235 "seek_hole": false, 00:21:33.235 "seek_data": false, 00:21:33.235 "copy": true, 00:21:33.235 "nvme_iov_md": false 00:21:33.235 }, 00:21:33.235 "memory_domains": [ 00:21:33.235 { 00:21:33.235 "dma_device_id": "system", 00:21:33.235 "dma_device_type": 1 00:21:33.235 }, 00:21:33.235 { 00:21:33.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.235 "dma_device_type": 2 00:21:33.235 } 00:21:33.235 ], 00:21:33.235 "driver_specific": {} 00:21:33.235 }' 00:21:33.235 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.235 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.235 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.235 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.493 08:48:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.493 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.493 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.493 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.493 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.493 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.752 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.752 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.752 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:34.010 [2024-07-12 08:48:09.084511] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:34.010 [2024-07-12 08:48:09.084620] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.010 [2024-07-12 08:48:09.084751] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.010 [2024-07-12 08:48:09.084836] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.010 [2024-07-12 08:48:09.084849] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 130208 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 130208 ']' 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 130208 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130208 00:21:34.010 killing process with pid 130208 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:34.010 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:34.011 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130208' 00:21:34.011 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 130208 00:21:34.011 08:48:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 130208 00:21:34.011 [2024-07-12 08:48:09.124637] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.269 [2024-07-12 08:48:09.377820] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.649 ************************************ 00:21:35.649 END TEST raid_state_function_test_sb 00:21:35.649 ************************************ 00:21:35.649 08:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:35.649 00:21:35.649 real 0m34.683s 00:21:35.649 user 
1m5.101s 00:21:35.649 sys 0m3.760s 00:21:35.649 08:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.649 08:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.649 08:48:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:35.649 08:48:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:21:35.649 08:48:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:35.649 08:48:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.649 08:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.649 ************************************ 00:21:35.649 START TEST raid_superblock_test 00:21:35.649 ************************************ 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131301 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131301 /var/tmp/spdk-raid.sock 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 131301 ']' 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
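Note: the lines above (bdev_raid.sh@410-@412) launch a dedicated bdev_svc app for raid_superblock_test on its own RPC socket and wait for it to come up; the steps that follow build each RAID member as a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev with a fixed UUID, then assemble the 3-member concat array with an on-disk superblock. A minimal sketch of that per-member setup, assuming the script path and socket shown in the log (the loop form is illustrative; the script iterates with its own counters):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # One malloc + passthru pair per member; the fixed UUIDs give the raid
    # superblock stable identifiers to refer to across teardown and re-creation.
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # 64 KiB strip, concat level, superblock enabled (-s).
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s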
00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.649 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.649 [2024-07-12 08:48:10.672734] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:21:35.649 [2024-07-12 08:48:10.672983] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131301 ] 00:21:35.908 [2024-07-12 08:48:10.848198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.166 [2024-07-12 08:48:11.145966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.425 [2024-07-12 08:48:11.367566] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:36.683 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:36.941 malloc1 00:21:36.941 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.199 [2024-07-12 08:48:12.199173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.199 [2024-07-12 08:48:12.199714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.199 [2024-07-12 08:48:12.199903] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:37.199 [2024-07-12 08:48:12.200054] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.199 [2024-07-12 08:48:12.203370] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.199 [2024-07-12 08:48:12.203545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.199 pt1 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:37.199 08:48:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:37.199 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:37.456 malloc2 00:21:37.457 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:37.715 [2024-07-12 08:48:12.743142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:37.715 [2024-07-12 08:48:12.743594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.715 [2024-07-12 08:48:12.743776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:37.715 [2024-07-12 08:48:12.743907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.715 [2024-07-12 08:48:12.746931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.715 [2024-07-12 08:48:12.747123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:37.715 pt2 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:37.715 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:37.973 malloc3 00:21:37.973 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:38.232 [2024-07-12 08:48:13.350053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:38.232 [2024-07-12 08:48:13.350512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.232 
[2024-07-12 08:48:13.350669] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:38.232 [2024-07-12 08:48:13.350809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.232 [2024-07-12 08:48:13.353924] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.232 [2024-07-12 08:48:13.354110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:38.232 pt3 00:21:38.232 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:38.232 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:38.232 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:38.491 [2024-07-12 08:48:13.630674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:38.491 [2024-07-12 08:48:13.633159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.491 [2024-07-12 08:48:13.633366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:38.491 [2024-07-12 08:48:13.633625] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:38.491 [2024-07-12 08:48:13.633763] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:38.491 [2024-07-12 08:48:13.633969] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:38.491 [2024-07-12 08:48:13.634510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:38.491 [2024-07-12 08:48:13.634634] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:38.491 [2024-07-12 08:48:13.634958] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.491 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.750 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:21:38.750 "name": "raid_bdev1", 00:21:38.750 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:38.750 "strip_size_kb": 64, 00:21:38.750 "state": "online", 00:21:38.750 "raid_level": "concat", 00:21:38.750 "superblock": true, 00:21:38.750 "num_base_bdevs": 3, 00:21:38.750 "num_base_bdevs_discovered": 3, 00:21:38.750 "num_base_bdevs_operational": 3, 00:21:38.750 "base_bdevs_list": [ 00:21:38.750 { 00:21:38.750 "name": "pt1", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.750 "is_configured": true, 00:21:38.750 "data_offset": 2048, 00:21:38.750 "data_size": 63488 00:21:38.750 }, 00:21:38.750 { 00:21:38.750 "name": "pt2", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.750 "is_configured": true, 00:21:38.750 "data_offset": 2048, 00:21:38.750 "data_size": 63488 00:21:38.750 }, 00:21:38.750 { 00:21:38.750 "name": "pt3", 00:21:38.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.750 "is_configured": true, 00:21:38.750 "data_offset": 2048, 00:21:38.750 "data_size": 63488 00:21:38.750 } 00:21:38.750 ] 00:21:38.750 }' 00:21:38.750 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.750 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.684 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:39.943 [2024-07-12 08:48:14.895799] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:39.943 "name": "raid_bdev1", 00:21:39.943 "aliases": [ 00:21:39.943 "bdc0978d-f882-40b9-8436-0ac3e7d10ac0" 00:21:39.943 ], 00:21:39.943 "product_name": "Raid Volume", 00:21:39.943 "block_size": 512, 00:21:39.943 "num_blocks": 190464, 00:21:39.943 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:39.943 "assigned_rate_limits": { 00:21:39.943 "rw_ios_per_sec": 0, 00:21:39.943 "rw_mbytes_per_sec": 0, 00:21:39.943 "r_mbytes_per_sec": 0, 00:21:39.943 "w_mbytes_per_sec": 0 00:21:39.943 }, 00:21:39.943 "claimed": false, 00:21:39.943 "zoned": false, 00:21:39.943 "supported_io_types": { 00:21:39.943 "read": true, 00:21:39.943 "write": true, 00:21:39.943 "unmap": true, 00:21:39.943 "flush": true, 00:21:39.943 "reset": true, 00:21:39.943 "nvme_admin": false, 00:21:39.943 "nvme_io": false, 00:21:39.943 "nvme_io_md": false, 00:21:39.943 "write_zeroes": true, 00:21:39.943 "zcopy": false, 00:21:39.943 "get_zone_info": false, 00:21:39.943 "zone_management": false, 00:21:39.943 "zone_append": false, 00:21:39.943 "compare": false, 00:21:39.943 "compare_and_write": false, 00:21:39.943 "abort": false, 
00:21:39.943 "seek_hole": false, 00:21:39.943 "seek_data": false, 00:21:39.943 "copy": false, 00:21:39.943 "nvme_iov_md": false 00:21:39.943 }, 00:21:39.943 "memory_domains": [ 00:21:39.943 { 00:21:39.943 "dma_device_id": "system", 00:21:39.943 "dma_device_type": 1 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.943 "dma_device_type": 2 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "dma_device_id": "system", 00:21:39.943 "dma_device_type": 1 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.943 "dma_device_type": 2 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "dma_device_id": "system", 00:21:39.943 "dma_device_type": 1 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.943 "dma_device_type": 2 00:21:39.943 } 00:21:39.943 ], 00:21:39.943 "driver_specific": { 00:21:39.943 "raid": { 00:21:39.943 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:39.943 "strip_size_kb": 64, 00:21:39.943 "state": "online", 00:21:39.943 "raid_level": "concat", 00:21:39.943 "superblock": true, 00:21:39.943 "num_base_bdevs": 3, 00:21:39.943 "num_base_bdevs_discovered": 3, 00:21:39.943 "num_base_bdevs_operational": 3, 00:21:39.943 "base_bdevs_list": [ 00:21:39.943 { 00:21:39.943 "name": "pt1", 00:21:39.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.943 "is_configured": true, 00:21:39.943 "data_offset": 2048, 00:21:39.943 "data_size": 63488 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "name": "pt2", 00:21:39.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.943 "is_configured": true, 00:21:39.943 "data_offset": 2048, 00:21:39.943 "data_size": 63488 00:21:39.943 }, 00:21:39.943 { 00:21:39.943 "name": "pt3", 00:21:39.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.943 "is_configured": true, 00:21:39.943 "data_offset": 2048, 00:21:39.943 "data_size": 63488 00:21:39.943 } 00:21:39.943 ] 00:21:39.943 } 00:21:39.943 } 00:21:39.943 }' 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:39.943 pt2 00:21:39.943 pt3' 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:39.943 08:48:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.202 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.202 "name": "pt1", 00:21:40.202 "aliases": [ 00:21:40.202 "00000000-0000-0000-0000-000000000001" 00:21:40.202 ], 00:21:40.202 "product_name": "passthru", 00:21:40.202 "block_size": 512, 00:21:40.202 "num_blocks": 65536, 00:21:40.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.202 "assigned_rate_limits": { 00:21:40.202 "rw_ios_per_sec": 0, 00:21:40.202 "rw_mbytes_per_sec": 0, 00:21:40.202 "r_mbytes_per_sec": 0, 00:21:40.202 "w_mbytes_per_sec": 0 00:21:40.202 }, 00:21:40.202 "claimed": true, 00:21:40.202 "claim_type": "exclusive_write", 00:21:40.202 "zoned": false, 00:21:40.202 "supported_io_types": { 00:21:40.202 "read": true, 00:21:40.202 "write": true, 00:21:40.202 "unmap": true, 00:21:40.202 "flush": true, 00:21:40.202 
"reset": true, 00:21:40.202 "nvme_admin": false, 00:21:40.202 "nvme_io": false, 00:21:40.202 "nvme_io_md": false, 00:21:40.202 "write_zeroes": true, 00:21:40.202 "zcopy": true, 00:21:40.202 "get_zone_info": false, 00:21:40.202 "zone_management": false, 00:21:40.202 "zone_append": false, 00:21:40.202 "compare": false, 00:21:40.202 "compare_and_write": false, 00:21:40.202 "abort": true, 00:21:40.202 "seek_hole": false, 00:21:40.203 "seek_data": false, 00:21:40.203 "copy": true, 00:21:40.203 "nvme_iov_md": false 00:21:40.203 }, 00:21:40.203 "memory_domains": [ 00:21:40.203 { 00:21:40.203 "dma_device_id": "system", 00:21:40.203 "dma_device_type": 1 00:21:40.203 }, 00:21:40.203 { 00:21:40.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.203 "dma_device_type": 2 00:21:40.203 } 00:21:40.203 ], 00:21:40.203 "driver_specific": { 00:21:40.203 "passthru": { 00:21:40.203 "name": "pt1", 00:21:40.203 "base_bdev_name": "malloc1" 00:21:40.203 } 00:21:40.203 } 00:21:40.203 }' 00:21:40.203 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.203 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.203 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.203 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.461 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.719 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.719 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.719 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.720 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:40.720 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.978 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.978 "name": "pt2", 00:21:40.978 "aliases": [ 00:21:40.978 "00000000-0000-0000-0000-000000000002" 00:21:40.978 ], 00:21:40.978 "product_name": "passthru", 00:21:40.978 "block_size": 512, 00:21:40.978 "num_blocks": 65536, 00:21:40.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.978 "assigned_rate_limits": { 00:21:40.978 "rw_ios_per_sec": 0, 00:21:40.978 "rw_mbytes_per_sec": 0, 00:21:40.978 "r_mbytes_per_sec": 0, 00:21:40.978 "w_mbytes_per_sec": 0 00:21:40.978 }, 00:21:40.978 "claimed": true, 00:21:40.978 "claim_type": "exclusive_write", 00:21:40.978 "zoned": false, 00:21:40.978 "supported_io_types": { 00:21:40.978 "read": true, 00:21:40.978 "write": true, 00:21:40.978 "unmap": true, 00:21:40.978 "flush": true, 00:21:40.978 "reset": true, 00:21:40.978 "nvme_admin": false, 00:21:40.978 "nvme_io": false, 00:21:40.978 "nvme_io_md": false, 00:21:40.978 "write_zeroes": true, 
00:21:40.978 "zcopy": true, 00:21:40.978 "get_zone_info": false, 00:21:40.978 "zone_management": false, 00:21:40.978 "zone_append": false, 00:21:40.978 "compare": false, 00:21:40.978 "compare_and_write": false, 00:21:40.978 "abort": true, 00:21:40.978 "seek_hole": false, 00:21:40.978 "seek_data": false, 00:21:40.978 "copy": true, 00:21:40.978 "nvme_iov_md": false 00:21:40.978 }, 00:21:40.978 "memory_domains": [ 00:21:40.978 { 00:21:40.978 "dma_device_id": "system", 00:21:40.978 "dma_device_type": 1 00:21:40.978 }, 00:21:40.978 { 00:21:40.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.978 "dma_device_type": 2 00:21:40.978 } 00:21:40.978 ], 00:21:40.978 "driver_specific": { 00:21:40.978 "passthru": { 00:21:40.978 "name": "pt2", 00:21:40.978 "base_bdev_name": "malloc2" 00:21:40.978 } 00:21:40.978 } 00:21:40.978 }' 00:21:40.978 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.978 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.978 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.978 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.236 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.494 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:41.494 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.494 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:41.494 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.753 "name": "pt3", 00:21:41.753 "aliases": [ 00:21:41.753 "00000000-0000-0000-0000-000000000003" 00:21:41.753 ], 00:21:41.753 "product_name": "passthru", 00:21:41.753 "block_size": 512, 00:21:41.753 "num_blocks": 65536, 00:21:41.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.753 "assigned_rate_limits": { 00:21:41.753 "rw_ios_per_sec": 0, 00:21:41.753 "rw_mbytes_per_sec": 0, 00:21:41.753 "r_mbytes_per_sec": 0, 00:21:41.753 "w_mbytes_per_sec": 0 00:21:41.753 }, 00:21:41.753 "claimed": true, 00:21:41.753 "claim_type": "exclusive_write", 00:21:41.753 "zoned": false, 00:21:41.753 "supported_io_types": { 00:21:41.753 "read": true, 00:21:41.753 "write": true, 00:21:41.753 "unmap": true, 00:21:41.753 "flush": true, 00:21:41.753 "reset": true, 00:21:41.753 "nvme_admin": false, 00:21:41.753 "nvme_io": false, 00:21:41.753 "nvme_io_md": false, 00:21:41.753 "write_zeroes": true, 00:21:41.753 "zcopy": true, 00:21:41.753 "get_zone_info": false, 00:21:41.753 "zone_management": false, 00:21:41.753 "zone_append": false, 00:21:41.753 
"compare": false, 00:21:41.753 "compare_and_write": false, 00:21:41.753 "abort": true, 00:21:41.753 "seek_hole": false, 00:21:41.753 "seek_data": false, 00:21:41.753 "copy": true, 00:21:41.753 "nvme_iov_md": false 00:21:41.753 }, 00:21:41.753 "memory_domains": [ 00:21:41.753 { 00:21:41.753 "dma_device_id": "system", 00:21:41.753 "dma_device_type": 1 00:21:41.753 }, 00:21:41.753 { 00:21:41.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.753 "dma_device_type": 2 00:21:41.753 } 00:21:41.753 ], 00:21:41.753 "driver_specific": { 00:21:41.753 "passthru": { 00:21:41.753 "name": "pt3", 00:21:41.753 "base_bdev_name": "malloc3" 00:21:41.753 } 00:21:41.753 } 00:21:41.753 }' 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.753 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.012 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.012 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.012 08:48:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.012 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:42.270 [2024-07-12 08:48:17.424223] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.270 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bdc0978d-f882-40b9-8436-0ac3e7d10ac0 00:21:42.270 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bdc0978d-f882-40b9-8436-0ac3e7d10ac0 ']' 00:21:42.270 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:42.528 [2024-07-12 08:48:17.715970] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.528 [2024-07-12 08:48:17.716022] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.528 [2024-07-12 08:48:17.716156] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.528 [2024-07-12 08:48:17.716254] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.528 [2024-07-12 08:48:17.716267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:42.786 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:42.786 08:48:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:43.044 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:43.044 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:43.044 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.044 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:43.307 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.307 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:43.590 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.590 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:43.849 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:43.849 08:48:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:44.107 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:44.365 [2024-07-12 08:48:19.426723] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:44.365 [2024-07-12 08:48:19.429419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:44.365 [2024-07-12 08:48:19.429518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:44.365 [2024-07-12 08:48:19.429599] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:44.365 [2024-07-12 08:48:19.429783] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:44.365 [2024-07-12 08:48:19.429851] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:44.365 [2024-07-12 08:48:19.429883] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.365 [2024-07-12 08:48:19.429949] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:21:44.365 request: 00:21:44.365 { 00:21:44.365 "name": "raid_bdev1", 00:21:44.365 "raid_level": "concat", 00:21:44.365 "base_bdevs": [ 00:21:44.365 "malloc1", 00:21:44.365 "malloc2", 00:21:44.365 "malloc3" 00:21:44.365 ], 00:21:44.365 "strip_size_kb": 64, 00:21:44.365 "superblock": false, 00:21:44.365 "method": "bdev_raid_create", 00:21:44.365 "req_id": 1 00:21:44.365 } 00:21:44.365 Got JSON-RPC error response 00:21:44.365 response: 00:21:44.365 { 00:21:44.365 "code": -17, 00:21:44.365 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:44.365 } 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.365 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:44.623 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:44.623 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:44.623 08:48:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:44.882 [2024-07-12 08:48:20.002805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:44.882 [2024-07-12 08:48:20.002996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.882 [2024-07-12 08:48:20.003060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:44.882 [2024-07-12 08:48:20.003085] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.882 [2024-07-12 08:48:20.006176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.882 [2024-07-12 08:48:20.006244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:44.882 [2024-07-12 08:48:20.006476] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:44.882 [2024-07-12 08:48:20.006558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.882 pt1 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.882 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.140 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.140 "name": "raid_bdev1", 00:21:45.140 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:45.140 "strip_size_kb": 64, 00:21:45.140 "state": "configuring", 00:21:45.140 "raid_level": "concat", 00:21:45.140 "superblock": true, 00:21:45.140 "num_base_bdevs": 3, 00:21:45.140 "num_base_bdevs_discovered": 1, 00:21:45.140 "num_base_bdevs_operational": 3, 00:21:45.140 "base_bdevs_list": [ 00:21:45.140 { 00:21:45.140 "name": "pt1", 00:21:45.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:45.140 "is_configured": true, 00:21:45.140 "data_offset": 2048, 00:21:45.141 "data_size": 63488 00:21:45.141 }, 00:21:45.141 { 00:21:45.141 "name": null, 00:21:45.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.141 "is_configured": false, 00:21:45.141 "data_offset": 2048, 00:21:45.141 "data_size": 63488 00:21:45.141 }, 00:21:45.141 { 00:21:45.141 "name": null, 00:21:45.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:45.141 "is_configured": false, 00:21:45.141 "data_offset": 2048, 00:21:45.141 "data_size": 63488 00:21:45.141 } 00:21:45.141 ] 00:21:45.141 }' 00:21:45.141 08:48:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.141 08:48:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.076 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:46.076 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:46.334 [2024-07-12 08:48:21.275209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:46.334 [2024-07-12 08:48:21.275333] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.334 [2024-07-12 08:48:21.275384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:46.334 [2024-07-12 08:48:21.275409] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.334 [2024-07-12 08:48:21.276069] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.334 [2024-07-12 08:48:21.276108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:46.334 [2024-07-12 08:48:21.276250] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:46.334 [2024-07-12 08:48:21.276288] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:46.334 pt2 00:21:46.334 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:46.334 [2024-07-12 08:48:21.527342] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.593 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.852 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.852 "name": "raid_bdev1", 00:21:46.852 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:46.852 "strip_size_kb": 64, 00:21:46.852 "state": "configuring", 00:21:46.852 "raid_level": "concat", 00:21:46.852 "superblock": true, 00:21:46.852 "num_base_bdevs": 3, 00:21:46.852 "num_base_bdevs_discovered": 1, 00:21:46.852 "num_base_bdevs_operational": 3, 00:21:46.852 "base_bdevs_list": [ 00:21:46.852 { 00:21:46.852 "name": "pt1", 00:21:46.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.852 "is_configured": true, 00:21:46.852 "data_offset": 2048, 00:21:46.852 "data_size": 63488 00:21:46.852 }, 00:21:46.852 { 00:21:46.852 "name": null, 00:21:46.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.852 "is_configured": false, 00:21:46.852 "data_offset": 2048, 00:21:46.852 "data_size": 63488 00:21:46.852 }, 00:21:46.852 { 00:21:46.852 "name": null, 00:21:46.852 "uuid": "00000000-0000-0000-0000-000000000003", 
00:21:46.852 "is_configured": false, 00:21:46.852 "data_offset": 2048, 00:21:46.852 "data_size": 63488 00:21:46.852 } 00:21:46.852 ] 00:21:46.852 }' 00:21:46.852 08:48:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.852 08:48:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.420 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:47.420 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:47.420 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:47.679 [2024-07-12 08:48:22.687543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:47.679 [2024-07-12 08:48:22.687689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.679 [2024-07-12 08:48:22.687755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:47.679 [2024-07-12 08:48:22.687788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.679 [2024-07-12 08:48:22.688474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.679 [2024-07-12 08:48:22.688513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:47.679 [2024-07-12 08:48:22.688641] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:47.679 [2024-07-12 08:48:22.688673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.679 pt2 00:21:47.679 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:47.679 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:47.679 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:47.938 [2024-07-12 08:48:22.955649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:47.938 [2024-07-12 08:48:22.955819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.938 [2024-07-12 08:48:22.955864] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:47.938 [2024-07-12 08:48:22.955895] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.938 [2024-07-12 08:48:22.956693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.938 [2024-07-12 08:48:22.956738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:47.938 [2024-07-12 08:48:22.956888] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:47.938 [2024-07-12 08:48:22.956929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:47.938 [2024-07-12 08:48:22.957107] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:21:47.938 [2024-07-12 08:48:22.957121] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:47.938 [2024-07-12 08:48:22.957226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
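Note: the debug output around this point shows the superblock-driven reassembly path rather than an explicit create: after raid_bdev1 and its passthru members were deleted, re-creating pt1, pt2 and pt3 makes bdev_raid's examine path find the on-disk superblock on each member (raid_bdev_examine_cont), and the concat array comes back online automatically once all three are claimed. A sketch of confirming that over RPC, assuming the same socket and bdev names as in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Re-register the last member; its raid superblock is found on examine and
    # the array reassembles without any bdev_raid_create call.
    $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

    # The reassembled raid should report online with all three members discovered.
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected: online 3/3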
00:21:47.938 [2024-07-12 08:48:22.957631] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:21:47.938 [2024-07-12 08:48:22.957646] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:21:47.938 [2024-07-12 08:48:22.957810] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.938 pt3 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.938 08:48:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.196 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.196 "name": "raid_bdev1", 00:21:48.196 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:48.196 "strip_size_kb": 64, 00:21:48.196 "state": "online", 00:21:48.196 "raid_level": "concat", 00:21:48.196 "superblock": true, 00:21:48.196 "num_base_bdevs": 3, 00:21:48.196 "num_base_bdevs_discovered": 3, 00:21:48.196 "num_base_bdevs_operational": 3, 00:21:48.196 "base_bdevs_list": [ 00:21:48.196 { 00:21:48.196 "name": "pt1", 00:21:48.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.196 "is_configured": true, 00:21:48.196 "data_offset": 2048, 00:21:48.196 "data_size": 63488 00:21:48.196 }, 00:21:48.196 { 00:21:48.196 "name": "pt2", 00:21:48.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.196 "is_configured": true, 00:21:48.196 "data_offset": 2048, 00:21:48.196 "data_size": 63488 00:21:48.196 }, 00:21:48.196 { 00:21:48.196 "name": "pt3", 00:21:48.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:48.196 "is_configured": true, 00:21:48.196 "data_offset": 2048, 00:21:48.196 "data_size": 63488 00:21:48.196 } 00:21:48.196 ] 00:21:48.196 }' 00:21:48.196 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.196 08:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:48.763 
08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:48.763 08:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:49.022 [2024-07-12 08:48:24.164302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.022 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:49.022 "name": "raid_bdev1", 00:21:49.022 "aliases": [ 00:21:49.022 "bdc0978d-f882-40b9-8436-0ac3e7d10ac0" 00:21:49.022 ], 00:21:49.022 "product_name": "Raid Volume", 00:21:49.022 "block_size": 512, 00:21:49.022 "num_blocks": 190464, 00:21:49.022 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:49.022 "assigned_rate_limits": { 00:21:49.022 "rw_ios_per_sec": 0, 00:21:49.022 "rw_mbytes_per_sec": 0, 00:21:49.022 "r_mbytes_per_sec": 0, 00:21:49.022 "w_mbytes_per_sec": 0 00:21:49.022 }, 00:21:49.022 "claimed": false, 00:21:49.022 "zoned": false, 00:21:49.022 "supported_io_types": { 00:21:49.022 "read": true, 00:21:49.022 "write": true, 00:21:49.022 "unmap": true, 00:21:49.022 "flush": true, 00:21:49.022 "reset": true, 00:21:49.022 "nvme_admin": false, 00:21:49.022 "nvme_io": false, 00:21:49.022 "nvme_io_md": false, 00:21:49.022 "write_zeroes": true, 00:21:49.022 "zcopy": false, 00:21:49.022 "get_zone_info": false, 00:21:49.022 "zone_management": false, 00:21:49.022 "zone_append": false, 00:21:49.022 "compare": false, 00:21:49.022 "compare_and_write": false, 00:21:49.022 "abort": false, 00:21:49.022 "seek_hole": false, 00:21:49.022 "seek_data": false, 00:21:49.022 "copy": false, 00:21:49.022 "nvme_iov_md": false 00:21:49.022 }, 00:21:49.022 "memory_domains": [ 00:21:49.022 { 00:21:49.022 "dma_device_id": "system", 00:21:49.022 "dma_device_type": 1 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.022 "dma_device_type": 2 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "dma_device_id": "system", 00:21:49.022 "dma_device_type": 1 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.022 "dma_device_type": 2 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "dma_device_id": "system", 00:21:49.022 "dma_device_type": 1 00:21:49.022 }, 00:21:49.022 { 00:21:49.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.022 "dma_device_type": 2 00:21:49.022 } 00:21:49.022 ], 00:21:49.022 "driver_specific": { 00:21:49.022 "raid": { 00:21:49.022 "uuid": "bdc0978d-f882-40b9-8436-0ac3e7d10ac0", 00:21:49.023 "strip_size_kb": 64, 00:21:49.023 "state": "online", 00:21:49.023 "raid_level": "concat", 00:21:49.023 "superblock": true, 00:21:49.023 "num_base_bdevs": 3, 00:21:49.023 "num_base_bdevs_discovered": 3, 00:21:49.023 "num_base_bdevs_operational": 3, 00:21:49.023 "base_bdevs_list": [ 00:21:49.023 { 00:21:49.023 "name": "pt1", 00:21:49.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.023 "is_configured": true, 00:21:49.023 
"data_offset": 2048, 00:21:49.023 "data_size": 63488 00:21:49.023 }, 00:21:49.023 { 00:21:49.023 "name": "pt2", 00:21:49.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.023 "is_configured": true, 00:21:49.023 "data_offset": 2048, 00:21:49.023 "data_size": 63488 00:21:49.023 }, 00:21:49.023 { 00:21:49.023 "name": "pt3", 00:21:49.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:49.023 "is_configured": true, 00:21:49.023 "data_offset": 2048, 00:21:49.023 "data_size": 63488 00:21:49.023 } 00:21:49.023 ] 00:21:49.023 } 00:21:49.023 } 00:21:49.023 }' 00:21:49.023 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:49.281 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:49.281 pt2 00:21:49.281 pt3' 00:21:49.281 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.281 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:49.281 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:49.539 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.539 "name": "pt1", 00:21:49.539 "aliases": [ 00:21:49.539 "00000000-0000-0000-0000-000000000001" 00:21:49.539 ], 00:21:49.539 "product_name": "passthru", 00:21:49.539 "block_size": 512, 00:21:49.539 "num_blocks": 65536, 00:21:49.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.539 "assigned_rate_limits": { 00:21:49.539 "rw_ios_per_sec": 0, 00:21:49.540 "rw_mbytes_per_sec": 0, 00:21:49.540 "r_mbytes_per_sec": 0, 00:21:49.540 "w_mbytes_per_sec": 0 00:21:49.540 }, 00:21:49.540 "claimed": true, 00:21:49.540 "claim_type": "exclusive_write", 00:21:49.540 "zoned": false, 00:21:49.540 "supported_io_types": { 00:21:49.540 "read": true, 00:21:49.540 "write": true, 00:21:49.540 "unmap": true, 00:21:49.540 "flush": true, 00:21:49.540 "reset": true, 00:21:49.540 "nvme_admin": false, 00:21:49.540 "nvme_io": false, 00:21:49.540 "nvme_io_md": false, 00:21:49.540 "write_zeroes": true, 00:21:49.540 "zcopy": true, 00:21:49.540 "get_zone_info": false, 00:21:49.540 "zone_management": false, 00:21:49.540 "zone_append": false, 00:21:49.540 "compare": false, 00:21:49.540 "compare_and_write": false, 00:21:49.540 "abort": true, 00:21:49.540 "seek_hole": false, 00:21:49.540 "seek_data": false, 00:21:49.540 "copy": true, 00:21:49.540 "nvme_iov_md": false 00:21:49.540 }, 00:21:49.540 "memory_domains": [ 00:21:49.540 { 00:21:49.540 "dma_device_id": "system", 00:21:49.540 "dma_device_type": 1 00:21:49.540 }, 00:21:49.540 { 00:21:49.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.540 "dma_device_type": 2 00:21:49.540 } 00:21:49.540 ], 00:21:49.540 "driver_specific": { 00:21:49.540 "passthru": { 00:21:49.540 "name": "pt1", 00:21:49.540 "base_bdev_name": "malloc1" 00:21:49.540 } 00:21:49.540 } 00:21:49.540 }' 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.540 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:49.798 08:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.060 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.060 "name": "pt2", 00:21:50.060 "aliases": [ 00:21:50.060 "00000000-0000-0000-0000-000000000002" 00:21:50.060 ], 00:21:50.060 "product_name": "passthru", 00:21:50.060 "block_size": 512, 00:21:50.060 "num_blocks": 65536, 00:21:50.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.060 "assigned_rate_limits": { 00:21:50.060 "rw_ios_per_sec": 0, 00:21:50.060 "rw_mbytes_per_sec": 0, 00:21:50.060 "r_mbytes_per_sec": 0, 00:21:50.060 "w_mbytes_per_sec": 0 00:21:50.060 }, 00:21:50.060 "claimed": true, 00:21:50.060 "claim_type": "exclusive_write", 00:21:50.060 "zoned": false, 00:21:50.060 "supported_io_types": { 00:21:50.060 "read": true, 00:21:50.060 "write": true, 00:21:50.060 "unmap": true, 00:21:50.060 "flush": true, 00:21:50.060 "reset": true, 00:21:50.061 "nvme_admin": false, 00:21:50.061 "nvme_io": false, 00:21:50.061 "nvme_io_md": false, 00:21:50.061 "write_zeroes": true, 00:21:50.061 "zcopy": true, 00:21:50.061 "get_zone_info": false, 00:21:50.061 "zone_management": false, 00:21:50.061 "zone_append": false, 00:21:50.061 "compare": false, 00:21:50.061 "compare_and_write": false, 00:21:50.061 "abort": true, 00:21:50.061 "seek_hole": false, 00:21:50.061 "seek_data": false, 00:21:50.061 "copy": true, 00:21:50.061 "nvme_iov_md": false 00:21:50.061 }, 00:21:50.061 "memory_domains": [ 00:21:50.061 { 00:21:50.061 "dma_device_id": "system", 00:21:50.061 "dma_device_type": 1 00:21:50.061 }, 00:21:50.061 { 00:21:50.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.061 "dma_device_type": 2 00:21:50.061 } 00:21:50.061 ], 00:21:50.061 "driver_specific": { 00:21:50.061 "passthru": { 00:21:50.061 "name": "pt2", 00:21:50.061 "base_bdev_name": "malloc2" 00:21:50.061 } 00:21:50.061 } 00:21:50.061 }' 00:21:50.061 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.061 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:50.319 
08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:50.319 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.578 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.578 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:50.578 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:50.578 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:50.578 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.836 "name": "pt3", 00:21:50.836 "aliases": [ 00:21:50.836 "00000000-0000-0000-0000-000000000003" 00:21:50.836 ], 00:21:50.836 "product_name": "passthru", 00:21:50.836 "block_size": 512, 00:21:50.836 "num_blocks": 65536, 00:21:50.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:50.836 "assigned_rate_limits": { 00:21:50.836 "rw_ios_per_sec": 0, 00:21:50.836 "rw_mbytes_per_sec": 0, 00:21:50.836 "r_mbytes_per_sec": 0, 00:21:50.836 "w_mbytes_per_sec": 0 00:21:50.836 }, 00:21:50.836 "claimed": true, 00:21:50.836 "claim_type": "exclusive_write", 00:21:50.836 "zoned": false, 00:21:50.836 "supported_io_types": { 00:21:50.836 "read": true, 00:21:50.836 "write": true, 00:21:50.836 "unmap": true, 00:21:50.836 "flush": true, 00:21:50.836 "reset": true, 00:21:50.836 "nvme_admin": false, 00:21:50.836 "nvme_io": false, 00:21:50.836 "nvme_io_md": false, 00:21:50.836 "write_zeroes": true, 00:21:50.836 "zcopy": true, 00:21:50.836 "get_zone_info": false, 00:21:50.836 "zone_management": false, 00:21:50.836 "zone_append": false, 00:21:50.836 "compare": false, 00:21:50.836 "compare_and_write": false, 00:21:50.836 "abort": true, 00:21:50.836 "seek_hole": false, 00:21:50.836 "seek_data": false, 00:21:50.836 "copy": true, 00:21:50.836 "nvme_iov_md": false 00:21:50.836 }, 00:21:50.836 "memory_domains": [ 00:21:50.836 { 00:21:50.836 "dma_device_id": "system", 00:21:50.836 "dma_device_type": 1 00:21:50.836 }, 00:21:50.836 { 00:21:50.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.836 "dma_device_type": 2 00:21:50.836 } 00:21:50.836 ], 00:21:50.836 "driver_specific": { 00:21:50.836 "passthru": { 00:21:50.836 "name": "pt3", 00:21:50.836 "base_bdev_name": "malloc3" 00:21:50.836 } 00:21:50.836 } 00:21:50.836 }' 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.836 08:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.094 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:51.353 [2024-07-12 08:48:26.496986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bdc0978d-f882-40b9-8436-0ac3e7d10ac0 '!=' bdc0978d-f882-40b9-8436-0ac3e7d10ac0 ']' 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131301 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 131301 ']' 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 131301 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131301 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131301' 00:21:51.353 killing process with pid 131301 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 131301 00:21:51.353 08:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 131301 00:21:51.353 [2024-07-12 08:48:26.535411] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:51.353 [2024-07-12 08:48:26.535507] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.353 [2024-07-12 08:48:26.535579] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.353 [2024-07-12 08:48:26.535597] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:21:51.612 [2024-07-12 08:48:26.779690] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.988 ************************************ 00:21:52.988 END TEST raid_superblock_test 00:21:52.988 ************************************ 00:21:52.988 08:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:52.988 00:21:52.988 real 0m17.394s 00:21:52.988 user 0m31.408s 00:21:52.988 sys 0m2.043s 00:21:52.988 08:48:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.988 08:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.988 08:48:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:52.988 08:48:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:21:52.988 08:48:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:52.988 08:48:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.988 08:48:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.988 ************************************ 00:21:52.988 START TEST raid_read_error_test 00:21:52.988 ************************************ 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:52.988 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.j0NosrnqMu 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=131829 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 131829 /var/tmp/spdk-raid.sock 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 131829 ']' 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.988 08:48:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.988 [2024-07-12 08:48:28.142994] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:21:52.988 [2024-07-12 08:48:28.143211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131829 ] 00:21:53.249 [2024-07-12 08:48:28.321897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.506 [2024-07-12 08:48:28.614285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.763 [2024-07-12 08:48:28.852089] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.022 08:48:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.022 08:48:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:54.022 08:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:54.022 08:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:54.588 BaseBdev1_malloc 00:21:54.588 08:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:54.846 true 00:21:54.846 08:48:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:55.103 [2024-07-12 08:48:30.120556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:55.103 [2024-07-12 08:48:30.120687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.103 [2024-07-12 08:48:30.120736] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:55.103 [2024-07-12 08:48:30.120759] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.103 [2024-07-12 08:48:30.123179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.103 [2024-07-12 08:48:30.123232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:55.103 BaseBdev1 00:21:55.103 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:55.103 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:55.361 BaseBdev2_malloc 00:21:55.361 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:55.619 true 00:21:55.619 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:55.877 [2024-07-12 08:48:30.982689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:55.877 [2024-07-12 08:48:30.982828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.877 [2024-07-12 08:48:30.982875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:55.877 [2024-07-12 08:48:30.982897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.877 [2024-07-12 08:48:30.985162] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.877 [2024-07-12 08:48:30.985208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:55.877 BaseBdev2 00:21:55.877 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:55.877 08:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:56.135 BaseBdev3_malloc 00:21:56.135 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:56.393 true 00:21:56.393 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:56.651 [2024-07-12 08:48:31.676448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:56.651 [2024-07-12 08:48:31.677387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.651 [2024-07-12 08:48:31.677682] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:56.651 [2024-07-12 08:48:31.677904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.651 [2024-07-12 08:48:31.682402] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.651 [2024-07-12 08:48:31.682678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:56.651 BaseBdev3 00:21:56.651 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:56.909 [2024-07-12 08:48:31.911242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.909 [2024-07-12 08:48:31.913724] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.909 [2024-07-12 08:48:31.913975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:56.910 [2024-07-12 08:48:31.914382] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:56.910 [2024-07-12 08:48:31.914511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:56.910 [2024-07-12 08:48:31.914711] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:56.910 [2024-07-12 08:48:31.915279] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:56.910 [2024-07-12 08:48:31.915407] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:56.910 [2024-07-12 08:48:31.915739] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.910 08:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.168 08:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.168 "name": "raid_bdev1", 00:21:57.168 "uuid": "2f407c9c-2b3f-4b41-99c5-f5dc27900e95", 00:21:57.168 "strip_size_kb": 64, 00:21:57.168 "state": "online", 00:21:57.168 "raid_level": "concat", 00:21:57.168 "superblock": true, 00:21:57.168 "num_base_bdevs": 3, 00:21:57.168 "num_base_bdevs_discovered": 3, 00:21:57.168 "num_base_bdevs_operational": 3, 00:21:57.168 "base_bdevs_list": [ 00:21:57.168 { 00:21:57.168 "name": "BaseBdev1", 00:21:57.168 "uuid": "ed4083be-d526-54c9-92f2-030a98a98b63", 00:21:57.168 "is_configured": true, 00:21:57.168 "data_offset": 2048, 00:21:57.168 "data_size": 63488 00:21:57.168 }, 00:21:57.168 { 00:21:57.168 "name": "BaseBdev2", 00:21:57.168 "uuid": "ee8a5165-0e09-544a-8dff-b9f62c4dd040", 00:21:57.168 "is_configured": true, 00:21:57.168 "data_offset": 2048, 
00:21:57.168 "data_size": 63488 00:21:57.168 }, 00:21:57.168 { 00:21:57.168 "name": "BaseBdev3", 00:21:57.168 "uuid": "b30d1f19-3c89-5699-93a8-5c56a9840aba", 00:21:57.168 "is_configured": true, 00:21:57.168 "data_offset": 2048, 00:21:57.168 "data_size": 63488 00:21:57.168 } 00:21:57.168 ] 00:21:57.168 }' 00:21:57.168 08:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.168 08:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.737 08:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:57.737 08:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:57.994 [2024-07-12 08:48:32.973270] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:58.967 08:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.224 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.480 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.480 "name": "raid_bdev1", 00:21:59.480 "uuid": "2f407c9c-2b3f-4b41-99c5-f5dc27900e95", 00:21:59.480 "strip_size_kb": 64, 00:21:59.480 "state": "online", 00:21:59.480 "raid_level": "concat", 00:21:59.480 "superblock": true, 00:21:59.480 "num_base_bdevs": 3, 00:21:59.480 "num_base_bdevs_discovered": 3, 00:21:59.480 "num_base_bdevs_operational": 3, 00:21:59.480 "base_bdevs_list": [ 00:21:59.480 { 00:21:59.480 "name": "BaseBdev1", 00:21:59.480 "uuid": "ed4083be-d526-54c9-92f2-030a98a98b63", 00:21:59.480 "is_configured": true, 00:21:59.480 "data_offset": 2048, 00:21:59.480 
"data_size": 63488 00:21:59.480 }, 00:21:59.480 { 00:21:59.480 "name": "BaseBdev2", 00:21:59.480 "uuid": "ee8a5165-0e09-544a-8dff-b9f62c4dd040", 00:21:59.480 "is_configured": true, 00:21:59.480 "data_offset": 2048, 00:21:59.480 "data_size": 63488 00:21:59.480 }, 00:21:59.480 { 00:21:59.480 "name": "BaseBdev3", 00:21:59.480 "uuid": "b30d1f19-3c89-5699-93a8-5c56a9840aba", 00:21:59.480 "is_configured": true, 00:21:59.480 "data_offset": 2048, 00:21:59.480 "data_size": 63488 00:21:59.480 } 00:21:59.480 ] 00:21:59.480 }' 00:21:59.480 08:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.480 08:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.044 08:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:00.608 [2024-07-12 08:48:35.495823] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.608 [2024-07-12 08:48:35.496102] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.608 [2024-07-12 08:48:35.499312] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.608 [2024-07-12 08:48:35.499491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.608 [2024-07-12 08:48:35.499656] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.608 [2024-07-12 08:48:35.499762] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:00.608 0 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 131829 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 131829 ']' 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 131829 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131829 00:22:00.608 killing process with pid 131829 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131829' 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 131829 00:22:00.608 08:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 131829 00:22:00.608 [2024-07-12 08:48:35.527085] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.608 [2024-07-12 08:48:35.722715] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.j0NosrnqMu 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:01.981 ************************************ 00:22:01.981 END 
TEST raid_read_error_test 00:22:01.981 ************************************ 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.40 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.40 != \0\.\0\0 ]] 00:22:01.981 00:22:01.981 real 0m9.081s 00:22:01.981 user 0m13.920s 00:22:01.981 sys 0m1.084s 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.981 08:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.240 08:48:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:02.240 08:48:37 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:22:02.240 08:48:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:02.240 08:48:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.240 08:48:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:02.240 ************************************ 00:22:02.240 START TEST raid_write_error_test 00:22:02.240 ************************************ 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:02.240 08:48:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NeLpyxBIal 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132045 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132045 /var/tmp/spdk-raid.sock 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 132045 ']' 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.240 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:02.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:02.241 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.241 08:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.241 [2024-07-12 08:48:37.265448] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:22:02.241 [2024-07-12 08:48:37.265784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132045 ] 00:22:02.241 [2024-07-12 08:48:37.432962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.499 [2024-07-12 08:48:37.681725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.757 [2024-07-12 08:48:37.879009] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:03.016 08:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.016 08:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:03.016 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:03.016 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:03.582 BaseBdev1_malloc 00:22:03.582 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:03.582 true 00:22:03.582 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:03.840 [2024-07-12 08:48:38.981121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:03.840 [2024-07-12 08:48:38.981374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.840 [2024-07-12 08:48:38.981578] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:03.840 [2024-07-12 08:48:38.981696] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.840 [2024-07-12 08:48:38.984620] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.840 [2024-07-12 08:48:38.984791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:03.840 BaseBdev1 00:22:03.840 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:03.840 08:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:04.099 BaseBdev2_malloc 00:22:04.099 08:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:04.357 true 00:22:04.357 08:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:04.616 [2024-07-12 08:48:39.741092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:04.616 [2024-07-12 08:48:39.742529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.616 [2024-07-12 08:48:39.742614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:04.616 [2024-07-12 
08:48:39.742874] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.616 [2024-07-12 08:48:39.745551] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.616 [2024-07-12 08:48:39.745730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:04.616 BaseBdev2 00:22:04.616 08:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:04.616 08:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:04.880 BaseBdev3_malloc 00:22:04.880 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:05.154 true 00:22:05.154 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:05.427 [2024-07-12 08:48:40.486867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:05.427 [2024-07-12 08:48:40.487178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.427 [2024-07-12 08:48:40.487334] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:05.427 [2024-07-12 08:48:40.487465] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.427 [2024-07-12 08:48:40.490169] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.427 [2024-07-12 08:48:40.490338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:05.427 BaseBdev3 00:22:05.427 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:05.686 [2024-07-12 08:48:40.743311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.686 [2024-07-12 08:48:40.745785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:05.686 [2024-07-12 08:48:40.746005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:05.686 [2024-07-12 08:48:40.746388] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:05.686 [2024-07-12 08:48:40.746512] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:05.686 [2024-07-12 08:48:40.746702] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:05.686 [2024-07-12 08:48:40.747265] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:05.686 [2024-07-12 08:48:40.747387] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:05.686 [2024-07-12 08:48:40.747712] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.686 
08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.686 08:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.945 08:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.945 "name": "raid_bdev1", 00:22:05.945 "uuid": "f2c2c728-c72a-4fa1-bbda-54c1af349244", 00:22:05.945 "strip_size_kb": 64, 00:22:05.945 "state": "online", 00:22:05.945 "raid_level": "concat", 00:22:05.945 "superblock": true, 00:22:05.945 "num_base_bdevs": 3, 00:22:05.945 "num_base_bdevs_discovered": 3, 00:22:05.945 "num_base_bdevs_operational": 3, 00:22:05.945 "base_bdevs_list": [ 00:22:05.945 { 00:22:05.945 "name": "BaseBdev1", 00:22:05.945 "uuid": "de13b6c2-d507-5697-9d3d-50ee2fe73497", 00:22:05.945 "is_configured": true, 00:22:05.945 "data_offset": 2048, 00:22:05.945 "data_size": 63488 00:22:05.945 }, 00:22:05.945 { 00:22:05.945 "name": "BaseBdev2", 00:22:05.945 "uuid": "57e6e9bb-77b4-5a43-a5ab-4dd9bab17fef", 00:22:05.945 "is_configured": true, 00:22:05.945 "data_offset": 2048, 00:22:05.945 "data_size": 63488 00:22:05.945 }, 00:22:05.945 { 00:22:05.945 "name": "BaseBdev3", 00:22:05.945 "uuid": "e1a82f02-9510-5330-9d2b-7671f343c531", 00:22:05.945 "is_configured": true, 00:22:05.945 "data_offset": 2048, 00:22:05.945 "data_size": 63488 00:22:05.945 } 00:22:05.945 ] 00:22:05.945 }' 00:22:05.945 08:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.945 08:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.510 08:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:06.510 08:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:06.768 [2024-07-12 08:48:41.781594] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:07.704 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:07.964 08:48:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.964 08:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.221 08:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.221 "name": "raid_bdev1", 00:22:08.221 "uuid": "f2c2c728-c72a-4fa1-bbda-54c1af349244", 00:22:08.221 "strip_size_kb": 64, 00:22:08.221 "state": "online", 00:22:08.221 "raid_level": "concat", 00:22:08.221 "superblock": true, 00:22:08.221 "num_base_bdevs": 3, 00:22:08.221 "num_base_bdevs_discovered": 3, 00:22:08.221 "num_base_bdevs_operational": 3, 00:22:08.221 "base_bdevs_list": [ 00:22:08.221 { 00:22:08.221 "name": "BaseBdev1", 00:22:08.221 "uuid": "de13b6c2-d507-5697-9d3d-50ee2fe73497", 00:22:08.221 "is_configured": true, 00:22:08.221 "data_offset": 2048, 00:22:08.221 "data_size": 63488 00:22:08.221 }, 00:22:08.221 { 00:22:08.221 "name": "BaseBdev2", 00:22:08.221 "uuid": "57e6e9bb-77b4-5a43-a5ab-4dd9bab17fef", 00:22:08.221 "is_configured": true, 00:22:08.221 "data_offset": 2048, 00:22:08.221 "data_size": 63488 00:22:08.221 }, 00:22:08.221 { 00:22:08.221 "name": "BaseBdev3", 00:22:08.221 "uuid": "e1a82f02-9510-5330-9d2b-7671f343c531", 00:22:08.221 "is_configured": true, 00:22:08.221 "data_offset": 2048, 00:22:08.221 "data_size": 63488 00:22:08.221 } 00:22:08.221 ] 00:22:08.221 }' 00:22:08.221 08:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.221 08:48:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.840 08:48:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:09.098 [2024-07-12 08:48:44.104368] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.098 [2024-07-12 08:48:44.104702] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.098 [2024-07-12 08:48:44.107739] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.098 [2024-07-12 08:48:44.107949] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.098 [2024-07-12 08:48:44.108032] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.098 [2024-07-12 08:48:44.108345] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:09.098 0 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132045 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 132045 ']' 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 132045 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132045 00:22:09.098 killing process with pid 132045 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132045' 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 132045 00:22:09.098 08:48:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 132045 00:22:09.098 [2024-07-12 08:48:44.135984] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.356 [2024-07-12 08:48:44.331416] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NeLpyxBIal 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:10.732 ************************************ 00:22:10.732 END TEST raid_write_error_test 00:22:10.732 ************************************ 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:10.732 00:22:10.732 real 0m8.312s 00:22:10.732 user 0m12.834s 00:22:10.732 sys 0m0.895s 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.732 08:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.732 08:48:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:10.732 08:48:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:10.733 08:48:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:22:10.733 08:48:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:10.733 08:48:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.733 08:48:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 
************************************ 00:22:10.733 START TEST raid_state_function_test 00:22:10.733 ************************************ 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132268 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132268' 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:10.733 Process raid pid: 132268 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 132268 /var/tmp/spdk-raid.sock 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 132268 ']' 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:10.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.733 08:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 [2024-07-12 08:48:45.666249] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:22:10.733 [2024-07-12 08:48:45.666832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.733 [2024-07-12 08:48:45.864031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.991 [2024-07-12 08:48:46.083948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.251 [2024-07-12 08:48:46.290352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.509 08:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.509 08:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:11.509 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:11.767 [2024-07-12 08:48:46.942563] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.767 [2024-07-12 08:48:46.942912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.767 [2024-07-12 08:48:46.943028] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:11.767 [2024-07-12 08:48:46.943098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:11.767 [2024-07-12 08:48:46.943201] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:11.767 [2024-07-12 08:48:46.943257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.767 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.025 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.025 08:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.025 08:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.025 "name": "Existed_Raid", 00:22:12.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.025 "strip_size_kb": 0, 00:22:12.025 "state": "configuring", 00:22:12.025 "raid_level": "raid1", 00:22:12.025 "superblock": false, 00:22:12.025 "num_base_bdevs": 3, 00:22:12.025 "num_base_bdevs_discovered": 0, 00:22:12.025 "num_base_bdevs_operational": 3, 00:22:12.025 "base_bdevs_list": [ 00:22:12.025 { 00:22:12.025 "name": "BaseBdev1", 00:22:12.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.025 "is_configured": false, 00:22:12.025 "data_offset": 0, 00:22:12.025 "data_size": 0 00:22:12.025 }, 00:22:12.025 { 00:22:12.025 "name": "BaseBdev2", 00:22:12.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.025 "is_configured": false, 00:22:12.025 "data_offset": 0, 00:22:12.025 "data_size": 0 00:22:12.025 }, 00:22:12.025 { 00:22:12.025 "name": "BaseBdev3", 00:22:12.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.025 "is_configured": false, 00:22:12.025 "data_offset": 0, 00:22:12.025 "data_size": 0 00:22:12.025 } 00:22:12.025 ] 00:22:12.025 }' 00:22:12.025 08:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.025 08:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.959 08:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:13.217 [2024-07-12 08:48:48.158695] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:13.217 [2024-07-12 08:48:48.158914] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:13.217 08:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:13.476 [2024-07-12 08:48:48.426757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:13.476 [2024-07-12 08:48:48.426971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:13.476 [2024-07-12 08:48:48.427088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:13.476 [2024-07-12 08:48:48.427153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:13.476 [2024-07-12 08:48:48.427273] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:13.476 [2024-07-12 08:48:48.427336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:13.476 08:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:13.733 [2024-07-12 08:48:48.694586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.733 BaseBdev1 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:13.733 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.991 08:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:13.991 [ 00:22:13.991 { 00:22:13.991 "name": "BaseBdev1", 00:22:13.991 "aliases": [ 00:22:13.991 "4bb917e6-1183-4adc-8aae-80e55922f7d7" 00:22:13.991 ], 00:22:13.991 "product_name": "Malloc disk", 00:22:13.991 "block_size": 512, 00:22:13.991 "num_blocks": 65536, 00:22:13.991 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:13.991 "assigned_rate_limits": { 00:22:13.991 "rw_ios_per_sec": 0, 00:22:13.991 "rw_mbytes_per_sec": 0, 00:22:13.991 "r_mbytes_per_sec": 0, 00:22:13.991 "w_mbytes_per_sec": 0 00:22:13.991 }, 00:22:13.991 "claimed": true, 00:22:13.991 "claim_type": "exclusive_write", 00:22:13.991 "zoned": false, 00:22:13.991 "supported_io_types": { 00:22:13.991 "read": true, 00:22:13.991 "write": true, 00:22:13.991 "unmap": true, 00:22:13.991 "flush": true, 00:22:13.991 "reset": true, 00:22:13.991 "nvme_admin": false, 00:22:13.991 "nvme_io": false, 00:22:13.991 "nvme_io_md": false, 00:22:13.991 "write_zeroes": true, 00:22:13.991 "zcopy": true, 00:22:13.991 "get_zone_info": false, 00:22:13.991 "zone_management": false, 00:22:13.991 "zone_append": false, 00:22:13.991 "compare": false, 00:22:13.991 "compare_and_write": false, 00:22:13.991 "abort": true, 00:22:13.991 "seek_hole": false, 00:22:13.991 "seek_data": false, 00:22:13.991 "copy": true, 00:22:13.991 "nvme_iov_md": false 00:22:13.991 }, 00:22:13.991 "memory_domains": [ 00:22:13.991 { 00:22:13.991 "dma_device_id": "system", 00:22:13.991 "dma_device_type": 1 00:22:13.991 }, 00:22:13.991 { 00:22:13.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.991 "dma_device_type": 2 00:22:13.991 } 00:22:13.991 ], 00:22:13.991 "driver_specific": {} 00:22:13.991 } 00:22:13.991 ] 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.991 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.557 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.557 "name": "Existed_Raid", 00:22:14.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.557 "strip_size_kb": 0, 00:22:14.557 "state": "configuring", 00:22:14.557 "raid_level": "raid1", 00:22:14.557 "superblock": false, 00:22:14.557 "num_base_bdevs": 3, 00:22:14.557 "num_base_bdevs_discovered": 1, 00:22:14.557 "num_base_bdevs_operational": 3, 00:22:14.557 "base_bdevs_list": [ 00:22:14.557 { 00:22:14.557 "name": "BaseBdev1", 00:22:14.557 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:14.557 "is_configured": true, 00:22:14.557 "data_offset": 0, 00:22:14.557 "data_size": 65536 00:22:14.557 }, 00:22:14.557 { 00:22:14.557 "name": "BaseBdev2", 00:22:14.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.557 "is_configured": false, 00:22:14.557 "data_offset": 0, 00:22:14.557 "data_size": 0 00:22:14.557 }, 00:22:14.557 { 00:22:14.557 "name": "BaseBdev3", 00:22:14.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.557 "is_configured": false, 00:22:14.557 "data_offset": 0, 00:22:14.557 "data_size": 0 00:22:14.557 } 00:22:14.557 ] 00:22:14.557 }' 00:22:14.557 08:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.557 08:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.123 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.381 [2024-07-12 08:48:50.395022] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.381 [2024-07-12 08:48:50.395280] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:22:15.381 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:15.640 [2024-07-12 08:48:50.635092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:15.640 [2024-07-12 08:48:50.637395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:22:15.640 [2024-07-12 08:48:50.637593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.640 [2024-07-12 08:48:50.637712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.640 [2024-07-12 08:48:50.637798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.640 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.898 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.898 "name": "Existed_Raid", 00:22:15.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.898 "strip_size_kb": 0, 00:22:15.898 "state": "configuring", 00:22:15.898 "raid_level": "raid1", 00:22:15.898 "superblock": false, 00:22:15.898 "num_base_bdevs": 3, 00:22:15.898 "num_base_bdevs_discovered": 1, 00:22:15.898 "num_base_bdevs_operational": 3, 00:22:15.898 "base_bdevs_list": [ 00:22:15.898 { 00:22:15.898 "name": "BaseBdev1", 00:22:15.898 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:15.898 "is_configured": true, 00:22:15.898 "data_offset": 0, 00:22:15.898 "data_size": 65536 00:22:15.898 }, 00:22:15.898 { 00:22:15.898 "name": "BaseBdev2", 00:22:15.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.898 "is_configured": false, 00:22:15.898 "data_offset": 0, 00:22:15.898 "data_size": 0 00:22:15.898 }, 00:22:15.898 { 00:22:15.898 "name": "BaseBdev3", 00:22:15.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.898 "is_configured": false, 00:22:15.898 "data_offset": 0, 00:22:15.898 "data_size": 0 00:22:15.898 } 00:22:15.898 ] 00:22:15.898 }' 00:22:15.898 08:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.898 08:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.465 08:48:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:16.723 [2024-07-12 08:48:51.915969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.723 BaseBdev2 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:16.981 08:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:17.239 08:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:17.498 [ 00:22:17.498 { 00:22:17.498 "name": "BaseBdev2", 00:22:17.498 "aliases": [ 00:22:17.498 "ed5e1d23-b3cd-4578-9e34-060570746324" 00:22:17.498 ], 00:22:17.498 "product_name": "Malloc disk", 00:22:17.498 "block_size": 512, 00:22:17.498 "num_blocks": 65536, 00:22:17.498 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:17.498 "assigned_rate_limits": { 00:22:17.498 "rw_ios_per_sec": 0, 00:22:17.498 "rw_mbytes_per_sec": 0, 00:22:17.498 "r_mbytes_per_sec": 0, 00:22:17.498 "w_mbytes_per_sec": 0 00:22:17.498 }, 00:22:17.498 "claimed": true, 00:22:17.498 "claim_type": "exclusive_write", 00:22:17.498 "zoned": false, 00:22:17.498 "supported_io_types": { 00:22:17.498 "read": true, 00:22:17.498 "write": true, 00:22:17.498 "unmap": true, 00:22:17.498 "flush": true, 00:22:17.498 "reset": true, 00:22:17.498 "nvme_admin": false, 00:22:17.498 "nvme_io": false, 00:22:17.498 "nvme_io_md": false, 00:22:17.498 "write_zeroes": true, 00:22:17.498 "zcopy": true, 00:22:17.498 "get_zone_info": false, 00:22:17.498 "zone_management": false, 00:22:17.498 "zone_append": false, 00:22:17.498 "compare": false, 00:22:17.498 "compare_and_write": false, 00:22:17.498 "abort": true, 00:22:17.498 "seek_hole": false, 00:22:17.498 "seek_data": false, 00:22:17.498 "copy": true, 00:22:17.498 "nvme_iov_md": false 00:22:17.498 }, 00:22:17.498 "memory_domains": [ 00:22:17.498 { 00:22:17.498 "dma_device_id": "system", 00:22:17.498 "dma_device_type": 1 00:22:17.498 }, 00:22:17.498 { 00:22:17.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.498 "dma_device_type": 2 00:22:17.498 } 00:22:17.498 ], 00:22:17.498 "driver_specific": {} 00:22:17.498 } 00:22:17.498 ] 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:17.498 08:48:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.498 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.756 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.756 "name": "Existed_Raid", 00:22:17.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.756 "strip_size_kb": 0, 00:22:17.756 "state": "configuring", 00:22:17.756 "raid_level": "raid1", 00:22:17.756 "superblock": false, 00:22:17.756 "num_base_bdevs": 3, 00:22:17.756 "num_base_bdevs_discovered": 2, 00:22:17.756 "num_base_bdevs_operational": 3, 00:22:17.756 "base_bdevs_list": [ 00:22:17.756 { 00:22:17.756 "name": "BaseBdev1", 00:22:17.756 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:17.756 "is_configured": true, 00:22:17.756 "data_offset": 0, 00:22:17.756 "data_size": 65536 00:22:17.756 }, 00:22:17.756 { 00:22:17.756 "name": "BaseBdev2", 00:22:17.756 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:17.756 "is_configured": true, 00:22:17.756 "data_offset": 0, 00:22:17.756 "data_size": 65536 00:22:17.756 }, 00:22:17.756 { 00:22:17.756 "name": "BaseBdev3", 00:22:17.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.756 "is_configured": false, 00:22:17.756 "data_offset": 0, 00:22:17.756 "data_size": 0 00:22:17.756 } 00:22:17.756 ] 00:22:17.756 }' 00:22:17.756 08:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.756 08:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.322 08:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:18.888 [2024-07-12 08:48:53.777422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:18.888 [2024-07-12 08:48:53.777711] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:18.888 [2024-07-12 08:48:53.777754] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:18.888 [2024-07-12 08:48:53.778016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:18.888 [2024-07-12 08:48:53.778543] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:18.888 
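For reference, the configure sequence above can be replayed by hand against a running bdev_svc target. A minimal sketch, assuming the same RPC socket (/var/tmp/spdk-raid.sock) and the same 32 MB / 512-byte malloc geometry the test uses (65536 blocks, matching the num_blocks reported above); the test itself creates the array first and lets the base bdevs claim in as each malloc bdev appears, but the end state is the same:
# create the three base bdevs, then assemble the raid1 array over them
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# the array should now report state "online" with all three base bdevs discovered
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
Once the create call returns, the get_bdevs output should match the JSON captured below: raid_level "raid1", strip_size_kb 0, and num_base_bdevs_discovered equal to 3.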
[2024-07-12 08:48:53.778660] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:22:18.888 [2024-07-12 08:48:53.779046] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.888 BaseBdev3 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:18.888 08:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.888 08:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:19.146 [ 00:22:19.146 { 00:22:19.146 "name": "BaseBdev3", 00:22:19.146 "aliases": [ 00:22:19.146 "6d8c36df-cdb2-4acd-935e-d475ac265897" 00:22:19.146 ], 00:22:19.146 "product_name": "Malloc disk", 00:22:19.146 "block_size": 512, 00:22:19.146 "num_blocks": 65536, 00:22:19.146 "uuid": "6d8c36df-cdb2-4acd-935e-d475ac265897", 00:22:19.146 "assigned_rate_limits": { 00:22:19.146 "rw_ios_per_sec": 0, 00:22:19.146 "rw_mbytes_per_sec": 0, 00:22:19.146 "r_mbytes_per_sec": 0, 00:22:19.146 "w_mbytes_per_sec": 0 00:22:19.146 }, 00:22:19.146 "claimed": true, 00:22:19.146 "claim_type": "exclusive_write", 00:22:19.146 "zoned": false, 00:22:19.146 "supported_io_types": { 00:22:19.146 "read": true, 00:22:19.146 "write": true, 00:22:19.146 "unmap": true, 00:22:19.146 "flush": true, 00:22:19.146 "reset": true, 00:22:19.146 "nvme_admin": false, 00:22:19.146 "nvme_io": false, 00:22:19.146 "nvme_io_md": false, 00:22:19.146 "write_zeroes": true, 00:22:19.146 "zcopy": true, 00:22:19.146 "get_zone_info": false, 00:22:19.146 "zone_management": false, 00:22:19.146 "zone_append": false, 00:22:19.146 "compare": false, 00:22:19.146 "compare_and_write": false, 00:22:19.146 "abort": true, 00:22:19.146 "seek_hole": false, 00:22:19.146 "seek_data": false, 00:22:19.146 "copy": true, 00:22:19.146 "nvme_iov_md": false 00:22:19.146 }, 00:22:19.146 "memory_domains": [ 00:22:19.146 { 00:22:19.146 "dma_device_id": "system", 00:22:19.146 "dma_device_type": 1 00:22:19.146 }, 00:22:19.146 { 00:22:19.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.146 "dma_device_type": 2 00:22:19.146 } 00:22:19.146 ], 00:22:19.146 "driver_specific": {} 00:22:19.146 } 00:22:19.146 ] 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.405 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.664 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.664 "name": "Existed_Raid", 00:22:19.664 "uuid": "005d3d5c-7cbe-41aa-80c0-fe44bee6c055", 00:22:19.664 "strip_size_kb": 0, 00:22:19.664 "state": "online", 00:22:19.664 "raid_level": "raid1", 00:22:19.664 "superblock": false, 00:22:19.664 "num_base_bdevs": 3, 00:22:19.664 "num_base_bdevs_discovered": 3, 00:22:19.664 "num_base_bdevs_operational": 3, 00:22:19.664 "base_bdevs_list": [ 00:22:19.664 { 00:22:19.664 "name": "BaseBdev1", 00:22:19.664 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:19.664 "is_configured": true, 00:22:19.664 "data_offset": 0, 00:22:19.664 "data_size": 65536 00:22:19.664 }, 00:22:19.664 { 00:22:19.664 "name": "BaseBdev2", 00:22:19.664 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:19.664 "is_configured": true, 00:22:19.664 "data_offset": 0, 00:22:19.664 "data_size": 65536 00:22:19.664 }, 00:22:19.664 { 00:22:19.664 "name": "BaseBdev3", 00:22:19.664 "uuid": "6d8c36df-cdb2-4acd-935e-d475ac265897", 00:22:19.664 "is_configured": true, 00:22:19.664 "data_offset": 0, 00:22:19.664 "data_size": 65536 00:22:19.664 } 00:22:19.664 ] 00:22:19.664 }' 00:22:19.664 08:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.664 08:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:20.231 08:48:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:20.489 [2024-07-12 08:48:55.562265] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:20.490 "name": "Existed_Raid", 00:22:20.490 "aliases": [ 00:22:20.490 "005d3d5c-7cbe-41aa-80c0-fe44bee6c055" 00:22:20.490 ], 00:22:20.490 "product_name": "Raid Volume", 00:22:20.490 "block_size": 512, 00:22:20.490 "num_blocks": 65536, 00:22:20.490 "uuid": "005d3d5c-7cbe-41aa-80c0-fe44bee6c055", 00:22:20.490 "assigned_rate_limits": { 00:22:20.490 "rw_ios_per_sec": 0, 00:22:20.490 "rw_mbytes_per_sec": 0, 00:22:20.490 "r_mbytes_per_sec": 0, 00:22:20.490 "w_mbytes_per_sec": 0 00:22:20.490 }, 00:22:20.490 "claimed": false, 00:22:20.490 "zoned": false, 00:22:20.490 "supported_io_types": { 00:22:20.490 "read": true, 00:22:20.490 "write": true, 00:22:20.490 "unmap": false, 00:22:20.490 "flush": false, 00:22:20.490 "reset": true, 00:22:20.490 "nvme_admin": false, 00:22:20.490 "nvme_io": false, 00:22:20.490 "nvme_io_md": false, 00:22:20.490 "write_zeroes": true, 00:22:20.490 "zcopy": false, 00:22:20.490 "get_zone_info": false, 00:22:20.490 "zone_management": false, 00:22:20.490 "zone_append": false, 00:22:20.490 "compare": false, 00:22:20.490 "compare_and_write": false, 00:22:20.490 "abort": false, 00:22:20.490 "seek_hole": false, 00:22:20.490 "seek_data": false, 00:22:20.490 "copy": false, 00:22:20.490 "nvme_iov_md": false 00:22:20.490 }, 00:22:20.490 "memory_domains": [ 00:22:20.490 { 00:22:20.490 "dma_device_id": "system", 00:22:20.490 "dma_device_type": 1 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.490 "dma_device_type": 2 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "dma_device_id": "system", 00:22:20.490 "dma_device_type": 1 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.490 "dma_device_type": 2 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "dma_device_id": "system", 00:22:20.490 "dma_device_type": 1 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.490 "dma_device_type": 2 00:22:20.490 } 00:22:20.490 ], 00:22:20.490 "driver_specific": { 00:22:20.490 "raid": { 00:22:20.490 "uuid": "005d3d5c-7cbe-41aa-80c0-fe44bee6c055", 00:22:20.490 "strip_size_kb": 0, 00:22:20.490 "state": "online", 00:22:20.490 "raid_level": "raid1", 00:22:20.490 "superblock": false, 00:22:20.490 "num_base_bdevs": 3, 00:22:20.490 "num_base_bdevs_discovered": 3, 00:22:20.490 "num_base_bdevs_operational": 3, 00:22:20.490 "base_bdevs_list": [ 00:22:20.490 { 00:22:20.490 "name": "BaseBdev1", 00:22:20.490 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:20.490 "is_configured": true, 00:22:20.490 "data_offset": 0, 00:22:20.490 "data_size": 65536 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "name": "BaseBdev2", 00:22:20.490 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:20.490 "is_configured": true, 00:22:20.490 "data_offset": 0, 00:22:20.490 "data_size": 65536 00:22:20.490 }, 00:22:20.490 { 00:22:20.490 "name": "BaseBdev3", 00:22:20.490 "uuid": "6d8c36df-cdb2-4acd-935e-d475ac265897", 00:22:20.490 "is_configured": true, 00:22:20.490 "data_offset": 0, 00:22:20.490 "data_size": 65536 00:22:20.490 } 00:22:20.490 ] 00:22:20.490 } 00:22:20.490 } 00:22:20.490 }' 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:20.490 BaseBdev2 00:22:20.490 BaseBdev3' 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:20.490 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.749 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.749 "name": "BaseBdev1", 00:22:20.749 "aliases": [ 00:22:20.749 "4bb917e6-1183-4adc-8aae-80e55922f7d7" 00:22:20.749 ], 00:22:20.749 "product_name": "Malloc disk", 00:22:20.749 "block_size": 512, 00:22:20.749 "num_blocks": 65536, 00:22:20.749 "uuid": "4bb917e6-1183-4adc-8aae-80e55922f7d7", 00:22:20.749 "assigned_rate_limits": { 00:22:20.749 "rw_ios_per_sec": 0, 00:22:20.749 "rw_mbytes_per_sec": 0, 00:22:20.749 "r_mbytes_per_sec": 0, 00:22:20.749 "w_mbytes_per_sec": 0 00:22:20.749 }, 00:22:20.749 "claimed": true, 00:22:20.749 "claim_type": "exclusive_write", 00:22:20.749 "zoned": false, 00:22:20.749 "supported_io_types": { 00:22:20.749 "read": true, 00:22:20.749 "write": true, 00:22:20.749 "unmap": true, 00:22:20.749 "flush": true, 00:22:20.749 "reset": true, 00:22:20.749 "nvme_admin": false, 00:22:20.749 "nvme_io": false, 00:22:20.749 "nvme_io_md": false, 00:22:20.749 "write_zeroes": true, 00:22:20.749 "zcopy": true, 00:22:20.749 "get_zone_info": false, 00:22:20.749 "zone_management": false, 00:22:20.749 "zone_append": false, 00:22:20.749 "compare": false, 00:22:20.749 "compare_and_write": false, 00:22:20.749 "abort": true, 00:22:20.749 "seek_hole": false, 00:22:20.749 "seek_data": false, 00:22:20.749 "copy": true, 00:22:20.749 "nvme_iov_md": false 00:22:20.749 }, 00:22:20.749 "memory_domains": [ 00:22:20.749 { 00:22:20.749 "dma_device_id": "system", 00:22:20.749 "dma_device_type": 1 00:22:20.749 }, 00:22:20.749 { 00:22:20.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.749 "dma_device_type": 2 00:22:20.750 } 00:22:20.750 ], 00:22:20.750 "driver_specific": {} 00:22:20.750 }' 00:22:20.750 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.008 08:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.008 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.266 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.266 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.266 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.266 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.266 08:48:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.267 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:21.267 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.525 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.525 "name": "BaseBdev2", 00:22:21.525 "aliases": [ 00:22:21.525 "ed5e1d23-b3cd-4578-9e34-060570746324" 00:22:21.525 ], 00:22:21.525 "product_name": "Malloc disk", 00:22:21.525 "block_size": 512, 00:22:21.525 "num_blocks": 65536, 00:22:21.525 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:21.525 "assigned_rate_limits": { 00:22:21.525 "rw_ios_per_sec": 0, 00:22:21.525 "rw_mbytes_per_sec": 0, 00:22:21.525 "r_mbytes_per_sec": 0, 00:22:21.525 "w_mbytes_per_sec": 0 00:22:21.525 }, 00:22:21.525 "claimed": true, 00:22:21.525 "claim_type": "exclusive_write", 00:22:21.525 "zoned": false, 00:22:21.525 "supported_io_types": { 00:22:21.525 "read": true, 00:22:21.525 "write": true, 00:22:21.525 "unmap": true, 00:22:21.525 "flush": true, 00:22:21.525 "reset": true, 00:22:21.525 "nvme_admin": false, 00:22:21.525 "nvme_io": false, 00:22:21.525 "nvme_io_md": false, 00:22:21.525 "write_zeroes": true, 00:22:21.525 "zcopy": true, 00:22:21.525 "get_zone_info": false, 00:22:21.525 "zone_management": false, 00:22:21.525 "zone_append": false, 00:22:21.525 "compare": false, 00:22:21.525 "compare_and_write": false, 00:22:21.525 "abort": true, 00:22:21.525 "seek_hole": false, 00:22:21.525 "seek_data": false, 00:22:21.525 "copy": true, 00:22:21.525 "nvme_iov_md": false 00:22:21.525 }, 00:22:21.525 "memory_domains": [ 00:22:21.525 { 00:22:21.525 "dma_device_id": "system", 00:22:21.525 "dma_device_type": 1 00:22:21.525 }, 00:22:21.525 { 00:22:21.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.525 "dma_device_type": 2 00:22:21.525 } 00:22:21.525 ], 00:22:21.525 "driver_specific": {} 00:22:21.525 }' 00:22:21.525 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.525 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.783 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.783 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.783 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.784 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.784 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.784 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.784 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.784 08:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.042 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.042 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:22.042 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:22.042 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:22.042 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:22.300 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:22.300 "name": "BaseBdev3", 00:22:22.300 "aliases": [ 00:22:22.300 "6d8c36df-cdb2-4acd-935e-d475ac265897" 00:22:22.300 ], 00:22:22.300 "product_name": "Malloc disk", 00:22:22.300 "block_size": 512, 00:22:22.300 "num_blocks": 65536, 00:22:22.300 "uuid": "6d8c36df-cdb2-4acd-935e-d475ac265897", 00:22:22.300 "assigned_rate_limits": { 00:22:22.300 "rw_ios_per_sec": 0, 00:22:22.300 "rw_mbytes_per_sec": 0, 00:22:22.300 "r_mbytes_per_sec": 0, 00:22:22.300 "w_mbytes_per_sec": 0 00:22:22.300 }, 00:22:22.300 "claimed": true, 00:22:22.300 "claim_type": "exclusive_write", 00:22:22.300 "zoned": false, 00:22:22.300 "supported_io_types": { 00:22:22.300 "read": true, 00:22:22.300 "write": true, 00:22:22.300 "unmap": true, 00:22:22.300 "flush": true, 00:22:22.300 "reset": true, 00:22:22.300 "nvme_admin": false, 00:22:22.300 "nvme_io": false, 00:22:22.300 "nvme_io_md": false, 00:22:22.300 "write_zeroes": true, 00:22:22.300 "zcopy": true, 00:22:22.300 "get_zone_info": false, 00:22:22.300 "zone_management": false, 00:22:22.300 "zone_append": false, 00:22:22.300 "compare": false, 00:22:22.300 "compare_and_write": false, 00:22:22.300 "abort": true, 00:22:22.300 "seek_hole": false, 00:22:22.300 "seek_data": false, 00:22:22.300 "copy": true, 00:22:22.300 "nvme_iov_md": false 00:22:22.300 }, 00:22:22.300 "memory_domains": [ 00:22:22.300 { 00:22:22.300 "dma_device_id": "system", 00:22:22.300 "dma_device_type": 1 00:22:22.300 }, 00:22:22.300 { 00:22:22.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.300 "dma_device_type": 2 00:22:22.300 } 00:22:22.300 ], 00:22:22.300 "driver_specific": {} 00:22:22.300 }' 00:22:22.300 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.300 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.300 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:22.300 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:22.558 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.816 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.816 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:22.816 08:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:23.074 [2024-07-12 08:48:58.070633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 
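The delete above exercises raid1 redundancy: removing a single base bdev from a three-way raid1 array must leave it online, with only num_base_bdevs_discovered dropping from 3 to 2. A minimal sketch of the same probe, assuming the Existed_Raid array assembled earlier in this test:
# drop one member, then confirm the array survives in degraded form
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
For raid0 or concat the same deletion would take the array offline, which is why the has_redundancy check gates the expected_state chosen below.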
00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.074 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.332 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.332 "name": "Existed_Raid", 00:22:23.332 "uuid": "005d3d5c-7cbe-41aa-80c0-fe44bee6c055", 00:22:23.332 "strip_size_kb": 0, 00:22:23.332 "state": "online", 00:22:23.332 "raid_level": "raid1", 00:22:23.332 "superblock": false, 00:22:23.332 "num_base_bdevs": 3, 00:22:23.332 "num_base_bdevs_discovered": 2, 00:22:23.332 "num_base_bdevs_operational": 2, 00:22:23.332 "base_bdevs_list": [ 00:22:23.332 { 00:22:23.332 "name": null, 00:22:23.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.332 "is_configured": false, 00:22:23.332 "data_offset": 0, 00:22:23.332 "data_size": 65536 00:22:23.332 }, 00:22:23.332 { 00:22:23.332 "name": "BaseBdev2", 00:22:23.332 "uuid": "ed5e1d23-b3cd-4578-9e34-060570746324", 00:22:23.332 "is_configured": true, 00:22:23.332 "data_offset": 0, 00:22:23.332 "data_size": 65536 00:22:23.332 }, 00:22:23.332 { 00:22:23.332 "name": "BaseBdev3", 00:22:23.332 "uuid": "6d8c36df-cdb2-4acd-935e-d475ac265897", 00:22:23.332 "is_configured": true, 00:22:23.332 "data_offset": 0, 00:22:23.332 "data_size": 65536 00:22:23.332 } 00:22:23.332 ] 00:22:23.332 }' 00:22:23.332 08:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.332 08:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:24.302 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:24.560 [2024-07-12 08:48:59.648445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:24.560 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:24.560 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:24.560 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:24.560 08:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.819 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:24.819 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:24.819 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:25.077 [2024-07-12 08:49:00.210416] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:25.077 [2024-07-12 08:49:00.210724] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.336 [2024-07-12 08:49:00.292459] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.336 [2024-07-12 08:49:00.292633] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.336 [2024-07-12 08:49:00.292732] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:25.336 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:25.336 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:25.336 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.336 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:25.594 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev2 00:22:25.853 BaseBdev2 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:25.853 08:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.110 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:26.110 [ 00:22:26.110 { 00:22:26.110 "name": "BaseBdev2", 00:22:26.110 "aliases": [ 00:22:26.110 "b5782785-99a8-4a94-a34a-d94b482f66ab" 00:22:26.110 ], 00:22:26.110 "product_name": "Malloc disk", 00:22:26.110 "block_size": 512, 00:22:26.110 "num_blocks": 65536, 00:22:26.110 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:26.110 "assigned_rate_limits": { 00:22:26.110 "rw_ios_per_sec": 0, 00:22:26.110 "rw_mbytes_per_sec": 0, 00:22:26.110 "r_mbytes_per_sec": 0, 00:22:26.110 "w_mbytes_per_sec": 0 00:22:26.110 }, 00:22:26.110 "claimed": false, 00:22:26.110 "zoned": false, 00:22:26.110 "supported_io_types": { 00:22:26.110 "read": true, 00:22:26.110 "write": true, 00:22:26.110 "unmap": true, 00:22:26.110 "flush": true, 00:22:26.110 "reset": true, 00:22:26.110 "nvme_admin": false, 00:22:26.110 "nvme_io": false, 00:22:26.110 "nvme_io_md": false, 00:22:26.110 "write_zeroes": true, 00:22:26.110 "zcopy": true, 00:22:26.110 "get_zone_info": false, 00:22:26.110 "zone_management": false, 00:22:26.110 "zone_append": false, 00:22:26.110 "compare": false, 00:22:26.110 "compare_and_write": false, 00:22:26.110 "abort": true, 00:22:26.110 "seek_hole": false, 00:22:26.110 "seek_data": false, 00:22:26.110 "copy": true, 00:22:26.110 "nvme_iov_md": false 00:22:26.110 }, 00:22:26.111 "memory_domains": [ 00:22:26.111 { 00:22:26.111 "dma_device_id": "system", 00:22:26.111 "dma_device_type": 1 00:22:26.111 }, 00:22:26.111 { 00:22:26.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.111 "dma_device_type": 2 00:22:26.111 } 00:22:26.111 ], 00:22:26.111 "driver_specific": {} 00:22:26.111 } 00:22:26.111 ] 00:22:26.368 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:26.368 08:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:26.368 08:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:26.368 08:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:26.368 BaseBdev3 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.626 08:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:26.884 [ 00:22:26.884 { 00:22:26.884 "name": "BaseBdev3", 00:22:26.884 "aliases": [ 00:22:26.884 "3c9a11b2-34a5-4597-a1cd-171e4fd67554" 00:22:26.884 ], 00:22:26.884 "product_name": "Malloc disk", 00:22:26.884 "block_size": 512, 00:22:26.884 "num_blocks": 65536, 00:22:26.884 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:26.884 "assigned_rate_limits": { 00:22:26.884 "rw_ios_per_sec": 0, 00:22:26.884 "rw_mbytes_per_sec": 0, 00:22:26.884 "r_mbytes_per_sec": 0, 00:22:26.884 "w_mbytes_per_sec": 0 00:22:26.884 }, 00:22:26.884 "claimed": false, 00:22:26.884 "zoned": false, 00:22:26.884 "supported_io_types": { 00:22:26.884 "read": true, 00:22:26.884 "write": true, 00:22:26.884 "unmap": true, 00:22:26.884 "flush": true, 00:22:26.884 "reset": true, 00:22:26.884 "nvme_admin": false, 00:22:26.884 "nvme_io": false, 00:22:26.884 "nvme_io_md": false, 00:22:26.884 "write_zeroes": true, 00:22:26.884 "zcopy": true, 00:22:26.884 "get_zone_info": false, 00:22:26.884 "zone_management": false, 00:22:26.884 "zone_append": false, 00:22:26.884 "compare": false, 00:22:26.884 "compare_and_write": false, 00:22:26.884 "abort": true, 00:22:26.884 "seek_hole": false, 00:22:26.884 "seek_data": false, 00:22:26.884 "copy": true, 00:22:26.884 "nvme_iov_md": false 00:22:26.884 }, 00:22:26.884 "memory_domains": [ 00:22:26.884 { 00:22:26.884 "dma_device_id": "system", 00:22:26.884 "dma_device_type": 1 00:22:26.884 }, 00:22:26.884 { 00:22:26.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.884 "dma_device_type": 2 00:22:26.884 } 00:22:26.884 ], 00:22:26.884 "driver_specific": {} 00:22:26.884 } 00:22:26.884 ] 00:22:26.884 08:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:26.884 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:26.884 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:26.884 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:27.143 [2024-07-12 08:49:02.307848] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:27.143 [2024-07-12 08:49:02.308090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:27.143 [2024-07-12 08:49:02.308219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:27.143 [2024-07-12 08:49:02.310336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.143 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.400 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.400 "name": "Existed_Raid", 00:22:27.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.400 "strip_size_kb": 0, 00:22:27.400 "state": "configuring", 00:22:27.400 "raid_level": "raid1", 00:22:27.400 "superblock": false, 00:22:27.400 "num_base_bdevs": 3, 00:22:27.400 "num_base_bdevs_discovered": 2, 00:22:27.400 "num_base_bdevs_operational": 3, 00:22:27.400 "base_bdevs_list": [ 00:22:27.400 { 00:22:27.400 "name": "BaseBdev1", 00:22:27.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.400 "is_configured": false, 00:22:27.400 "data_offset": 0, 00:22:27.400 "data_size": 0 00:22:27.400 }, 00:22:27.401 { 00:22:27.401 "name": "BaseBdev2", 00:22:27.401 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:27.401 "is_configured": true, 00:22:27.401 "data_offset": 0, 00:22:27.401 "data_size": 65536 00:22:27.401 }, 00:22:27.401 { 00:22:27.401 "name": "BaseBdev3", 00:22:27.401 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:27.401 "is_configured": true, 00:22:27.401 "data_offset": 0, 00:22:27.401 "data_size": 65536 00:22:27.401 } 00:22:27.401 ] 00:22:27.401 }' 00:22:27.401 08:49:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.401 08:49:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:28.334 [2024-07-12 08:49:03.424075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.334 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.335 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.335 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.592 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.592 "name": "Existed_Raid", 00:22:28.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.592 "strip_size_kb": 0, 00:22:28.592 "state": "configuring", 00:22:28.592 "raid_level": "raid1", 00:22:28.592 "superblock": false, 00:22:28.592 "num_base_bdevs": 3, 00:22:28.592 "num_base_bdevs_discovered": 1, 00:22:28.592 "num_base_bdevs_operational": 3, 00:22:28.592 "base_bdevs_list": [ 00:22:28.592 { 00:22:28.592 "name": "BaseBdev1", 00:22:28.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.592 "is_configured": false, 00:22:28.592 "data_offset": 0, 00:22:28.592 "data_size": 0 00:22:28.592 }, 00:22:28.592 { 00:22:28.592 "name": null, 00:22:28.592 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:28.592 "is_configured": false, 00:22:28.592 "data_offset": 0, 00:22:28.592 "data_size": 65536 00:22:28.592 }, 00:22:28.592 { 00:22:28.592 "name": "BaseBdev3", 00:22:28.592 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:28.592 "is_configured": true, 00:22:28.592 "data_offset": 0, 00:22:28.592 "data_size": 65536 00:22:28.592 } 00:22:28.592 ] 00:22:28.592 }' 00:22:28.592 08:49:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.592 08:49:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.159 08:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.159 08:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:29.417 08:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:29.417 08:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:29.984 [2024-07-12 08:49:04.899350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.984 BaseBdev1 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:29.984 08:49:04 
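Each base bdev in this test is brought up the same way: bdev_malloc_create followed by the waitforbdev helper from common/autotest_common.sh, which is the expansion running above and continuing below. A minimal sketch of the equivalent rpc.py sequence, using the script path and -s socket that appear throughout this log; the 2000 ms timeout mirrors the bdev_timeout fallback the helper applies when the caller passes none:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock

  # 32 MiB of 512-byte blocks (hence the "num_blocks": 65536 reported in
  # the JSON dumps), registered under the given name.
  $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1

  # Let examine callbacks settle, then block until the bdev is visible
  # (-t is a per-call timeout in milliseconds).
  $RPC -s $SOCK bdev_wait_for_examine
  $RPC -s $SOCK bdev_get_bdevs -b BaseBdev1 -t 2000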
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:29.984 08:49:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:29.984 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:30.244 [ 00:22:30.244 { 00:22:30.244 "name": "BaseBdev1", 00:22:30.244 "aliases": [ 00:22:30.244 "d0ae6a26-9233-4ef0-8832-664660acb62b" 00:22:30.244 ], 00:22:30.244 "product_name": "Malloc disk", 00:22:30.244 "block_size": 512, 00:22:30.244 "num_blocks": 65536, 00:22:30.244 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:30.244 "assigned_rate_limits": { 00:22:30.244 "rw_ios_per_sec": 0, 00:22:30.244 "rw_mbytes_per_sec": 0, 00:22:30.244 "r_mbytes_per_sec": 0, 00:22:30.244 "w_mbytes_per_sec": 0 00:22:30.244 }, 00:22:30.244 "claimed": true, 00:22:30.244 "claim_type": "exclusive_write", 00:22:30.244 "zoned": false, 00:22:30.244 "supported_io_types": { 00:22:30.244 "read": true, 00:22:30.244 "write": true, 00:22:30.244 "unmap": true, 00:22:30.244 "flush": true, 00:22:30.244 "reset": true, 00:22:30.244 "nvme_admin": false, 00:22:30.244 "nvme_io": false, 00:22:30.244 "nvme_io_md": false, 00:22:30.244 "write_zeroes": true, 00:22:30.244 "zcopy": true, 00:22:30.244 "get_zone_info": false, 00:22:30.244 "zone_management": false, 00:22:30.244 "zone_append": false, 00:22:30.244 "compare": false, 00:22:30.244 "compare_and_write": false, 00:22:30.244 "abort": true, 00:22:30.244 "seek_hole": false, 00:22:30.244 "seek_data": false, 00:22:30.244 "copy": true, 00:22:30.244 "nvme_iov_md": false 00:22:30.244 }, 00:22:30.244 "memory_domains": [ 00:22:30.244 { 00:22:30.244 "dma_device_id": "system", 00:22:30.244 "dma_device_type": 1 00:22:30.244 }, 00:22:30.244 { 00:22:30.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.244 "dma_device_type": 2 00:22:30.244 } 00:22:30.244 ], 00:22:30.244 "driver_specific": {} 00:22:30.244 } 00:22:30.244 ] 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.244 08:49:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.244 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.812 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.812 "name": "Existed_Raid", 00:22:30.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.812 "strip_size_kb": 0, 00:22:30.812 "state": "configuring", 00:22:30.812 "raid_level": "raid1", 00:22:30.812 "superblock": false, 00:22:30.812 "num_base_bdevs": 3, 00:22:30.812 "num_base_bdevs_discovered": 2, 00:22:30.812 "num_base_bdevs_operational": 3, 00:22:30.812 "base_bdevs_list": [ 00:22:30.812 { 00:22:30.812 "name": "BaseBdev1", 00:22:30.812 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:30.812 "is_configured": true, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 }, 00:22:30.812 { 00:22:30.812 "name": null, 00:22:30.812 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:30.812 "is_configured": false, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 }, 00:22:30.812 { 00:22:30.812 "name": "BaseBdev3", 00:22:30.812 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:30.812 "is_configured": true, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 } 00:22:30.812 ] 00:22:30.812 }' 00:22:30.812 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.812 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.379 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.379 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:31.638 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:31.638 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:31.896 [2024-07-12 08:49:06.871978] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.896 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.154 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.154 "name": "Existed_Raid", 00:22:32.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.154 "strip_size_kb": 0, 00:22:32.154 "state": "configuring", 00:22:32.154 "raid_level": "raid1", 00:22:32.154 "superblock": false, 00:22:32.154 "num_base_bdevs": 3, 00:22:32.154 "num_base_bdevs_discovered": 1, 00:22:32.154 "num_base_bdevs_operational": 3, 00:22:32.154 "base_bdevs_list": [ 00:22:32.154 { 00:22:32.154 "name": "BaseBdev1", 00:22:32.154 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:32.154 "is_configured": true, 00:22:32.154 "data_offset": 0, 00:22:32.154 "data_size": 65536 00:22:32.154 }, 00:22:32.154 { 00:22:32.154 "name": null, 00:22:32.154 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:32.154 "is_configured": false, 00:22:32.154 "data_offset": 0, 00:22:32.154 "data_size": 65536 00:22:32.154 }, 00:22:32.154 { 00:22:32.154 "name": null, 00:22:32.154 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:32.154 "is_configured": false, 00:22:32.154 "data_offset": 0, 00:22:32.154 "data_size": 65536 00:22:32.154 } 00:22:32.154 ] 00:22:32.154 }' 00:22:32.154 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.154 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.721 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.721 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:32.979 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:32.979 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:33.237 [2024-07-12 08:49:08.256776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.237 08:49:08 
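Every verify_raid_bdev_state call in this log expands to the same probe: dump all raid bdevs, select the array under test with jq, and compare the captured fields. A condensed reconstruction follows; it is a sketch only, since the real helper (bdev_raid.sh@116-128) also checks raid_level, strip_size and the num_base_bdevs_* counters pulled from the same JSON:

  # Condensed sketch of the verify_raid_bdev_state probe.
  verify_state() {
      local name=$1 expected=$2 info
      info=$($RPC -s $SOCK bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r '.state' <<< "$info") == "$expected" ]]
  }

  verify_state Existed_Raid configuring   # e.g. one of three slots filled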
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.237 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.495 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.495 "name": "Existed_Raid", 00:22:33.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.495 "strip_size_kb": 0, 00:22:33.495 "state": "configuring", 00:22:33.495 "raid_level": "raid1", 00:22:33.495 "superblock": false, 00:22:33.495 "num_base_bdevs": 3, 00:22:33.495 "num_base_bdevs_discovered": 2, 00:22:33.495 "num_base_bdevs_operational": 3, 00:22:33.495 "base_bdevs_list": [ 00:22:33.495 { 00:22:33.495 "name": "BaseBdev1", 00:22:33.495 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:33.495 "is_configured": true, 00:22:33.495 "data_offset": 0, 00:22:33.495 "data_size": 65536 00:22:33.495 }, 00:22:33.495 { 00:22:33.495 "name": null, 00:22:33.495 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:33.495 "is_configured": false, 00:22:33.495 "data_offset": 0, 00:22:33.495 "data_size": 65536 00:22:33.495 }, 00:22:33.495 { 00:22:33.495 "name": "BaseBdev3", 00:22:33.495 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:33.495 "is_configured": true, 00:22:33.495 "data_offset": 0, 00:22:33.495 "data_size": 65536 00:22:33.495 } 00:22:33.495 ] 00:22:33.495 }' 00:22:33.495 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.495 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.245 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.245 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:34.245 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:34.245 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:34.810 [2024-07-12 08:49:09.701265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.810 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.068 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.068 "name": "Existed_Raid", 00:22:35.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.069 "strip_size_kb": 0, 00:22:35.069 "state": "configuring", 00:22:35.069 "raid_level": "raid1", 00:22:35.069 "superblock": false, 00:22:35.069 "num_base_bdevs": 3, 00:22:35.069 "num_base_bdevs_discovered": 1, 00:22:35.069 "num_base_bdevs_operational": 3, 00:22:35.069 "base_bdevs_list": [ 00:22:35.069 { 00:22:35.069 "name": null, 00:22:35.069 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:35.069 "is_configured": false, 00:22:35.069 "data_offset": 0, 00:22:35.069 "data_size": 65536 00:22:35.069 }, 00:22:35.069 { 00:22:35.069 "name": null, 00:22:35.069 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:35.069 "is_configured": false, 00:22:35.069 "data_offset": 0, 00:22:35.069 "data_size": 65536 00:22:35.069 }, 00:22:35.069 { 00:22:35.069 "name": "BaseBdev3", 00:22:35.069 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:35.069 "is_configured": true, 00:22:35.069 "data_offset": 0, 00:22:35.069 "data_size": 65536 00:22:35.069 } 00:22:35.069 ] 00:22:35.069 }' 00:22:35.069 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.069 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.634 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.634 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:36.199 [2024-07-12 08:49:11.361067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.199 08:49:11 
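Two different detach paths are interleaved in this stretch of the test. BaseBdev1 was destroyed outright with bdev_malloc_delete, which the raid module only observes through its _raid_bdev_remove_base_bdev callback, while BaseBdev2 was detached earlier with bdev_raid_remove_base_bdev and kept alive, so it can be handed straight back, as the "bdev BaseBdev2 is claimed" line above shows. The RPC pair, exactly as used here:

  $RPC -s $SOCK bdev_raid_remove_base_bdev BaseBdev2            # detach; the bdev survives
  $RPC -s $SOCK bdev_raid_add_base_bdev Existed_Raid BaseBdev2  # re-attach to the array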
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.199 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.457 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.457 "name": "Existed_Raid", 00:22:36.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.457 "strip_size_kb": 0, 00:22:36.457 "state": "configuring", 00:22:36.458 "raid_level": "raid1", 00:22:36.458 "superblock": false, 00:22:36.458 "num_base_bdevs": 3, 00:22:36.458 "num_base_bdevs_discovered": 2, 00:22:36.458 "num_base_bdevs_operational": 3, 00:22:36.458 "base_bdevs_list": [ 00:22:36.458 { 00:22:36.458 "name": null, 00:22:36.458 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:36.458 "is_configured": false, 00:22:36.458 "data_offset": 0, 00:22:36.458 "data_size": 65536 00:22:36.458 }, 00:22:36.458 { 00:22:36.458 "name": "BaseBdev2", 00:22:36.458 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:36.458 "is_configured": true, 00:22:36.458 "data_offset": 0, 00:22:36.458 "data_size": 65536 00:22:36.458 }, 00:22:36.458 { 00:22:36.458 "name": "BaseBdev3", 00:22:36.458 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:36.458 "is_configured": true, 00:22:36.458 "data_offset": 0, 00:22:36.458 "data_size": 65536 00:22:36.458 } 00:22:36.458 ] 00:22:36.458 }' 00:22:36.458 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.458 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.393 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.393 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:37.652 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:37.652 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.652 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:37.911 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d0ae6a26-9233-4ef0-8832-664660acb62b 00:22:38.168 [2024-07-12 08:49:13.117011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:38.168 [2024-07-12 08:49:13.117078] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:38.168 [2024-07-12 08:49:13.117089] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:38.168 [2024-07-12 08:49:13.117233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:38.168 [2024-07-12 08:49:13.117586] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:38.168 [2024-07-12 08:49:13.117611] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:22:38.168 [2024-07-12 08:49:13.117875] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.168 NewBaseBdev 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:38.168 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.426 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:38.426 [ 00:22:38.426 { 00:22:38.426 "name": "NewBaseBdev", 00:22:38.426 "aliases": [ 00:22:38.426 "d0ae6a26-9233-4ef0-8832-664660acb62b" 00:22:38.426 ], 00:22:38.426 "product_name": "Malloc disk", 00:22:38.426 "block_size": 512, 00:22:38.426 "num_blocks": 65536, 00:22:38.426 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:38.426 "assigned_rate_limits": { 00:22:38.426 "rw_ios_per_sec": 0, 00:22:38.426 "rw_mbytes_per_sec": 0, 00:22:38.426 "r_mbytes_per_sec": 0, 00:22:38.426 "w_mbytes_per_sec": 0 00:22:38.426 }, 00:22:38.426 "claimed": true, 00:22:38.426 "claim_type": "exclusive_write", 00:22:38.426 "zoned": false, 00:22:38.426 "supported_io_types": { 00:22:38.426 "read": true, 00:22:38.426 "write": true, 00:22:38.426 "unmap": true, 00:22:38.426 "flush": true, 00:22:38.426 "reset": true, 00:22:38.426 "nvme_admin": false, 00:22:38.427 "nvme_io": false, 00:22:38.427 "nvme_io_md": false, 00:22:38.427 "write_zeroes": true, 00:22:38.427 "zcopy": true, 00:22:38.427 "get_zone_info": false, 00:22:38.427 "zone_management": false, 00:22:38.427 "zone_append": false, 00:22:38.427 "compare": false, 00:22:38.427 "compare_and_write": false, 00:22:38.427 "abort": true, 00:22:38.427 "seek_hole": false, 00:22:38.427 "seek_data": false, 00:22:38.427 "copy": true, 00:22:38.427 "nvme_iov_md": false 00:22:38.427 }, 00:22:38.427 "memory_domains": [ 00:22:38.427 { 00:22:38.427 "dma_device_id": "system", 00:22:38.427 "dma_device_type": 1 00:22:38.427 }, 00:22:38.427 { 00:22:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.427 "dma_device_type": 2 00:22:38.427 } 00:22:38.427 ], 00:22:38.427 "driver_specific": {} 00:22:38.427 } 00:22:38.427 ] 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- 
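The online transition just logged hinges on the -u flag of the preceding bdev_malloc_create: NewBaseBdev is created under the UUID the array still records for its empty first slot, so raid_bdev_configure_base_bdev can claim it and complete the raid1 set. Sketched with the UUID read back from the raid info rather than hard-coded, mirroring the sh@333 jq call above:

  # Recover the UUID remembered for the unconfigured slot 0, then recreate
  # a malloc bdev under it so the raid reclaims the slot and goes online.
  uuid=$($RPC -s $SOCK bdev_raid_get_bdevs all |
         jq -r '.[0].base_bdevs_list[0].uuid')
  $RPC -s $SOCK bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"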
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.427 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.685 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.685 "name": "Existed_Raid", 00:22:38.685 "uuid": "7a452c56-1f2d-4443-a609-249c5bfa3583", 00:22:38.685 "strip_size_kb": 0, 00:22:38.685 "state": "online", 00:22:38.685 "raid_level": "raid1", 00:22:38.685 "superblock": false, 00:22:38.685 "num_base_bdevs": 3, 00:22:38.685 "num_base_bdevs_discovered": 3, 00:22:38.685 "num_base_bdevs_operational": 3, 00:22:38.685 "base_bdevs_list": [ 00:22:38.685 { 00:22:38.685 "name": "NewBaseBdev", 00:22:38.685 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:38.685 "is_configured": true, 00:22:38.685 "data_offset": 0, 00:22:38.685 "data_size": 65536 00:22:38.685 }, 00:22:38.685 { 00:22:38.685 "name": "BaseBdev2", 00:22:38.685 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:38.685 "is_configured": true, 00:22:38.685 "data_offset": 0, 00:22:38.685 "data_size": 65536 00:22:38.685 }, 00:22:38.685 { 00:22:38.685 "name": "BaseBdev3", 00:22:38.685 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:38.685 "is_configured": true, 00:22:38.685 "data_offset": 0, 00:22:38.685 "data_size": 65536 00:22:38.685 } 00:22:38.685 ] 00:22:38.685 }' 00:22:38.685 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.685 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:39.622 [2024-07-12 08:49:14.757782] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:39.622 "name": "Existed_Raid", 00:22:39.622 "aliases": [ 00:22:39.622 "7a452c56-1f2d-4443-a609-249c5bfa3583" 00:22:39.622 ], 00:22:39.622 "product_name": "Raid Volume", 00:22:39.622 "block_size": 512, 00:22:39.622 "num_blocks": 65536, 00:22:39.622 "uuid": "7a452c56-1f2d-4443-a609-249c5bfa3583", 00:22:39.622 "assigned_rate_limits": { 00:22:39.622 "rw_ios_per_sec": 0, 00:22:39.622 "rw_mbytes_per_sec": 0, 00:22:39.622 "r_mbytes_per_sec": 0, 00:22:39.622 "w_mbytes_per_sec": 0 00:22:39.622 }, 00:22:39.622 "claimed": false, 00:22:39.622 "zoned": false, 00:22:39.622 "supported_io_types": { 00:22:39.622 "read": true, 00:22:39.622 "write": true, 00:22:39.622 "unmap": false, 00:22:39.622 "flush": false, 00:22:39.622 "reset": true, 00:22:39.622 "nvme_admin": false, 00:22:39.622 "nvme_io": false, 00:22:39.622 "nvme_io_md": false, 00:22:39.622 "write_zeroes": true, 00:22:39.622 "zcopy": false, 00:22:39.622 "get_zone_info": false, 00:22:39.622 "zone_management": false, 00:22:39.622 "zone_append": false, 00:22:39.622 "compare": false, 00:22:39.622 "compare_and_write": false, 00:22:39.622 "abort": false, 00:22:39.622 "seek_hole": false, 00:22:39.622 "seek_data": false, 00:22:39.622 "copy": false, 00:22:39.622 "nvme_iov_md": false 00:22:39.622 }, 00:22:39.622 "memory_domains": [ 00:22:39.622 { 00:22:39.622 "dma_device_id": "system", 00:22:39.622 "dma_device_type": 1 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.622 "dma_device_type": 2 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "dma_device_id": "system", 00:22:39.622 "dma_device_type": 1 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.622 "dma_device_type": 2 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "dma_device_id": "system", 00:22:39.622 "dma_device_type": 1 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.622 "dma_device_type": 2 00:22:39.622 } 00:22:39.622 ], 00:22:39.622 "driver_specific": { 00:22:39.622 "raid": { 00:22:39.622 "uuid": "7a452c56-1f2d-4443-a609-249c5bfa3583", 00:22:39.622 "strip_size_kb": 0, 00:22:39.622 "state": "online", 00:22:39.622 "raid_level": "raid1", 00:22:39.622 "superblock": false, 00:22:39.622 "num_base_bdevs": 3, 00:22:39.622 "num_base_bdevs_discovered": 3, 00:22:39.622 "num_base_bdevs_operational": 3, 00:22:39.622 "base_bdevs_list": [ 00:22:39.622 { 00:22:39.622 "name": "NewBaseBdev", 00:22:39.622 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:39.622 "is_configured": true, 00:22:39.622 "data_offset": 0, 00:22:39.622 "data_size": 65536 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "name": "BaseBdev2", 00:22:39.622 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:39.622 "is_configured": true, 00:22:39.622 "data_offset": 0, 00:22:39.622 "data_size": 65536 00:22:39.622 }, 00:22:39.622 { 00:22:39.622 "name": "BaseBdev3", 00:22:39.622 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:39.622 "is_configured": true, 00:22:39.622 "data_offset": 0, 00:22:39.622 "data_size": 65536 00:22:39.622 } 00:22:39.622 ] 00:22:39.622 } 00:22:39.622 } 00:22:39.622 }' 00:22:39.622 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:39.881 08:49:14 bdev_raid.raid_state_function_test -- 
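The verify_raid_bdev_properties pass entered at sh@194 above drives the run of paired jq probes that follows: it pulls the configured base bdev names out of the Raid Volume JSON just dumped, then checks that .block_size, .md_size, .md_interleave and .dif_type agree between the raid volume and every base bdev. A condensed sketch of that loop, with the field list taken from the sh@205-208 probes:

  # Cross-check layout fields between the raid volume and its base bdevs.
  raid_info=$($RPC -s $SOCK bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                 | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
      base_info=$($RPC -s $SOCK bdev_get_bdevs -b "$name" | jq '.[]')
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ $(jq "$field" <<< "$raid_info") == $(jq "$field" <<< "$base_info") ]]
      done
  done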
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:39.881 BaseBdev2 00:22:39.881 BaseBdev3' 00:22:39.881 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:39.881 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:39.881 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:40.140 "name": "NewBaseBdev", 00:22:40.140 "aliases": [ 00:22:40.140 "d0ae6a26-9233-4ef0-8832-664660acb62b" 00:22:40.140 ], 00:22:40.140 "product_name": "Malloc disk", 00:22:40.140 "block_size": 512, 00:22:40.140 "num_blocks": 65536, 00:22:40.140 "uuid": "d0ae6a26-9233-4ef0-8832-664660acb62b", 00:22:40.140 "assigned_rate_limits": { 00:22:40.140 "rw_ios_per_sec": 0, 00:22:40.140 "rw_mbytes_per_sec": 0, 00:22:40.140 "r_mbytes_per_sec": 0, 00:22:40.140 "w_mbytes_per_sec": 0 00:22:40.140 }, 00:22:40.140 "claimed": true, 00:22:40.140 "claim_type": "exclusive_write", 00:22:40.140 "zoned": false, 00:22:40.140 "supported_io_types": { 00:22:40.140 "read": true, 00:22:40.140 "write": true, 00:22:40.140 "unmap": true, 00:22:40.140 "flush": true, 00:22:40.140 "reset": true, 00:22:40.140 "nvme_admin": false, 00:22:40.140 "nvme_io": false, 00:22:40.140 "nvme_io_md": false, 00:22:40.140 "write_zeroes": true, 00:22:40.140 "zcopy": true, 00:22:40.140 "get_zone_info": false, 00:22:40.140 "zone_management": false, 00:22:40.140 "zone_append": false, 00:22:40.140 "compare": false, 00:22:40.140 "compare_and_write": false, 00:22:40.140 "abort": true, 00:22:40.140 "seek_hole": false, 00:22:40.140 "seek_data": false, 00:22:40.140 "copy": true, 00:22:40.140 "nvme_iov_md": false 00:22:40.140 }, 00:22:40.140 "memory_domains": [ 00:22:40.140 { 00:22:40.140 "dma_device_id": "system", 00:22:40.140 "dma_device_type": 1 00:22:40.140 }, 00:22:40.140 { 00:22:40.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.140 "dma_device_type": 2 00:22:40.140 } 00:22:40.140 ], 00:22:40.140 "driver_specific": {} 00:22:40.140 }' 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:40.140 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:40.399 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:40.659 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:40.659 "name": "BaseBdev2", 00:22:40.659 "aliases": [ 00:22:40.659 "b5782785-99a8-4a94-a34a-d94b482f66ab" 00:22:40.659 ], 00:22:40.659 "product_name": "Malloc disk", 00:22:40.659 "block_size": 512, 00:22:40.659 "num_blocks": 65536, 00:22:40.659 "uuid": "b5782785-99a8-4a94-a34a-d94b482f66ab", 00:22:40.659 "assigned_rate_limits": { 00:22:40.659 "rw_ios_per_sec": 0, 00:22:40.659 "rw_mbytes_per_sec": 0, 00:22:40.659 "r_mbytes_per_sec": 0, 00:22:40.659 "w_mbytes_per_sec": 0 00:22:40.659 }, 00:22:40.659 "claimed": true, 00:22:40.659 "claim_type": "exclusive_write", 00:22:40.659 "zoned": false, 00:22:40.659 "supported_io_types": { 00:22:40.659 "read": true, 00:22:40.659 "write": true, 00:22:40.659 "unmap": true, 00:22:40.659 "flush": true, 00:22:40.659 "reset": true, 00:22:40.659 "nvme_admin": false, 00:22:40.659 "nvme_io": false, 00:22:40.659 "nvme_io_md": false, 00:22:40.659 "write_zeroes": true, 00:22:40.659 "zcopy": true, 00:22:40.659 "get_zone_info": false, 00:22:40.659 "zone_management": false, 00:22:40.659 "zone_append": false, 00:22:40.659 "compare": false, 00:22:40.659 "compare_and_write": false, 00:22:40.659 "abort": true, 00:22:40.659 "seek_hole": false, 00:22:40.659 "seek_data": false, 00:22:40.659 "copy": true, 00:22:40.659 "nvme_iov_md": false 00:22:40.659 }, 00:22:40.659 "memory_domains": [ 00:22:40.659 { 00:22:40.659 "dma_device_id": "system", 00:22:40.659 "dma_device_type": 1 00:22:40.659 }, 00:22:40.659 { 00:22:40.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.659 "dma_device_type": 2 00:22:40.659 } 00:22:40.659 ], 00:22:40.659 "driver_specific": {} 00:22:40.659 }' 00:22:40.659 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.917 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:40.917 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:40.917 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.917 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:40.917 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:40.917 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:40.917 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:22:41.174 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:41.431 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:41.431 "name": "BaseBdev3", 00:22:41.431 "aliases": [ 00:22:41.431 "3c9a11b2-34a5-4597-a1cd-171e4fd67554" 00:22:41.431 ], 00:22:41.431 "product_name": "Malloc disk", 00:22:41.431 "block_size": 512, 00:22:41.431 "num_blocks": 65536, 00:22:41.431 "uuid": "3c9a11b2-34a5-4597-a1cd-171e4fd67554", 00:22:41.431 "assigned_rate_limits": { 00:22:41.431 "rw_ios_per_sec": 0, 00:22:41.432 "rw_mbytes_per_sec": 0, 00:22:41.432 "r_mbytes_per_sec": 0, 00:22:41.432 "w_mbytes_per_sec": 0 00:22:41.432 }, 00:22:41.432 "claimed": true, 00:22:41.432 "claim_type": "exclusive_write", 00:22:41.432 "zoned": false, 00:22:41.432 "supported_io_types": { 00:22:41.432 "read": true, 00:22:41.432 "write": true, 00:22:41.432 "unmap": true, 00:22:41.432 "flush": true, 00:22:41.432 "reset": true, 00:22:41.432 "nvme_admin": false, 00:22:41.432 "nvme_io": false, 00:22:41.432 "nvme_io_md": false, 00:22:41.432 "write_zeroes": true, 00:22:41.432 "zcopy": true, 00:22:41.432 "get_zone_info": false, 00:22:41.432 "zone_management": false, 00:22:41.432 "zone_append": false, 00:22:41.432 "compare": false, 00:22:41.432 "compare_and_write": false, 00:22:41.432 "abort": true, 00:22:41.432 "seek_hole": false, 00:22:41.432 "seek_data": false, 00:22:41.432 "copy": true, 00:22:41.432 "nvme_iov_md": false 00:22:41.432 }, 00:22:41.432 "memory_domains": [ 00:22:41.432 { 00:22:41.432 "dma_device_id": "system", 00:22:41.432 "dma_device_type": 1 00:22:41.432 }, 00:22:41.432 { 00:22:41.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.432 "dma_device_type": 2 00:22:41.432 } 00:22:41.432 ], 00:22:41.432 "driver_specific": {} 00:22:41.432 }' 00:22:41.432 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.432 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:41.432 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:41.432 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.688 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:41.689 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:41.689 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.689 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:41.689 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:41.689 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.946 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:41.946 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:41.946 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:42.203 [2024-07-12 08:49:17.166106] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:42.203 [2024-07-12 08:49:17.166162] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.203 [2024-07-12 08:49:17.166279] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.203 [2024-07-12 08:49:17.166630] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.203 [2024-07-12 08:49:17.166671] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132268 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 132268 ']' 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 132268 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132268 00:22:42.203 killing process with pid 132268 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132268' 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 132268 00:22:42.203 08:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 132268 00:22:42.203 [2024-07-12 08:49:17.201131] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:42.461 [2024-07-12 08:49:17.443443] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.834 ************************************ 00:22:43.834 END TEST raid_state_function_test 00:22:43.834 ************************************ 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:43.834 00:22:43.834 real 0m33.041s 00:22:43.834 user 1m1.779s 00:22:43.834 sys 0m3.747s 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 08:49:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:43.834 08:49:18 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:22:43.834 08:49:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:43.834 08:49:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.834 08:49:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 ************************************ 00:22:43.834 START TEST raid_state_function_test_sb 00:22:43.834 ************************************ 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:43.834 
08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=133333 00:22:43.834 Process raid pid: 133333 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133333' 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 133333 /var/tmp/spdk-raid.sock 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 133333 ']' 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:43.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.834 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 [2024-07-12 08:49:18.733323] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:22:43.834 [2024-07-12 08:49:18.733582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.834 [2024-07-12 08:49:18.906356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.091 [2024-07-12 08:49:19.160377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.348 [2024-07-12 08:49:19.374860] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.606 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.606 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:44.606 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.864 [2024-07-12 08:49:20.038306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.864 [2024-07-12 08:49:20.038450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.864 [2024-07-12 08:49:20.038468] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.864 [2024-07-12 08:49:20.038517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.864 [2024-07-12 08:49:20.038526] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.864 [2024-07-12 08:49:20.038579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.864 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.431 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.431 "name": "Existed_Raid", 00:22:45.431 "uuid": "bf129cff-757f-4160-8edb-f9ed8339f733", 00:22:45.431 "strip_size_kb": 0, 00:22:45.431 "state": "configuring", 00:22:45.431 "raid_level": "raid1", 00:22:45.431 "superblock": true, 00:22:45.431 "num_base_bdevs": 3, 00:22:45.431 "num_base_bdevs_discovered": 0, 00:22:45.431 "num_base_bdevs_operational": 3, 00:22:45.431 "base_bdevs_list": [ 00:22:45.431 { 00:22:45.431 "name": "BaseBdev1", 00:22:45.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 0, 00:22:45.431 "data_size": 0 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "name": "BaseBdev2", 00:22:45.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 0, 00:22:45.431 "data_size": 0 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "name": "BaseBdev3", 00:22:45.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 0, 00:22:45.431 "data_size": 0 00:22:45.431 } 00:22:45.431 ] 00:22:45.431 }' 00:22:45.431 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.431 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.996 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:46.261 [2024-07-12 08:49:21.274544] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.261 [2024-07-12 08:49:21.274627] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:46.261 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:46.519 [2024-07-12 08:49:21.558720] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.519 [2024-07-12 08:49:21.558817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.519 [2024-07-12 08:49:21.558847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.519 [2024-07-12 08:49:21.558866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.519 [2024-07-12 08:49:21.558874] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.519 [2024-07-12 08:49:21.558896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.519 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.777 [2024-07-12 08:49:21.832234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.777 BaseBdev1 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:46.777 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.035 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:47.293 [ 00:22:47.293 { 00:22:47.293 "name": "BaseBdev1", 00:22:47.293 "aliases": [ 00:22:47.293 "e1584aae-590d-43b5-8e0e-17aa59d474fb" 00:22:47.293 ], 00:22:47.293 "product_name": "Malloc disk", 00:22:47.293 "block_size": 512, 00:22:47.293 "num_blocks": 65536, 00:22:47.293 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:47.293 "assigned_rate_limits": { 00:22:47.293 "rw_ios_per_sec": 0, 00:22:47.293 "rw_mbytes_per_sec": 0, 00:22:47.293 "r_mbytes_per_sec": 0, 00:22:47.293 "w_mbytes_per_sec": 0 00:22:47.293 }, 00:22:47.293 "claimed": true, 00:22:47.293 "claim_type": "exclusive_write", 00:22:47.293 "zoned": false, 00:22:47.293 "supported_io_types": { 00:22:47.293 "read": true, 00:22:47.293 "write": true, 00:22:47.293 "unmap": true, 00:22:47.293 "flush": true, 00:22:47.293 "reset": true, 00:22:47.293 "nvme_admin": false, 00:22:47.293 "nvme_io": false, 00:22:47.293 "nvme_io_md": false, 00:22:47.293 "write_zeroes": true, 00:22:47.293 "zcopy": true, 00:22:47.293 "get_zone_info": false, 00:22:47.293 "zone_management": false, 00:22:47.293 "zone_append": false, 00:22:47.293 "compare": false, 00:22:47.293 "compare_and_write": false, 00:22:47.293 "abort": true, 00:22:47.293 "seek_hole": false, 00:22:47.293 "seek_data": false, 00:22:47.293 "copy": true, 00:22:47.293 "nvme_iov_md": false 00:22:47.293 }, 00:22:47.293 "memory_domains": [ 00:22:47.293 { 00:22:47.293 "dma_device_id": "system", 00:22:47.293 "dma_device_type": 1 00:22:47.293 }, 00:22:47.293 { 00:22:47.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.293 "dma_device_type": 2 00:22:47.293 } 00:22:47.293 ], 00:22:47.293 "driver_specific": {} 00:22:47.293 } 00:22:47.293 ] 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.293 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.552 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.552 "name": "Existed_Raid", 00:22:47.552 "uuid": "ec469b11-6577-487e-addd-e7218814db85", 00:22:47.552 "strip_size_kb": 0, 00:22:47.552 "state": "configuring", 00:22:47.552 "raid_level": "raid1", 00:22:47.552 "superblock": true, 00:22:47.552 "num_base_bdevs": 3, 00:22:47.552 "num_base_bdevs_discovered": 1, 00:22:47.552 "num_base_bdevs_operational": 3, 00:22:47.552 "base_bdevs_list": [ 00:22:47.552 { 00:22:47.552 "name": "BaseBdev1", 00:22:47.552 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:47.552 "is_configured": true, 00:22:47.552 "data_offset": 2048, 00:22:47.552 "data_size": 63488 00:22:47.552 }, 00:22:47.552 { 00:22:47.552 "name": "BaseBdev2", 00:22:47.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.552 "is_configured": false, 00:22:47.552 "data_offset": 0, 00:22:47.552 "data_size": 0 00:22:47.552 }, 00:22:47.552 { 00:22:47.552 "name": "BaseBdev3", 00:22:47.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.552 "is_configured": false, 00:22:47.552 "data_offset": 0, 00:22:47.552 "data_size": 0 00:22:47.552 } 00:22:47.552 ] 00:22:47.552 }' 00:22:47.552 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.552 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.486 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:48.486 [2024-07-12 08:49:23.580862] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.486 [2024-07-12 08:49:23.580945] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:22:48.486 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:48.752 [2024-07-12 08:49:23.821027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.752 [2024-07-12 08:49:23.823345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.752 [2024-07-12 08:49:23.823446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.752 [2024-07-12 08:49:23.823477] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.752 [2024-07-12 08:49:23.823525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.752 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.014 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.014 "name": "Existed_Raid", 00:22:49.014 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:49.014 "strip_size_kb": 0, 00:22:49.014 "state": "configuring", 00:22:49.014 "raid_level": "raid1", 00:22:49.014 "superblock": true, 00:22:49.014 "num_base_bdevs": 3, 00:22:49.014 "num_base_bdevs_discovered": 1, 00:22:49.014 "num_base_bdevs_operational": 3, 00:22:49.014 "base_bdevs_list": [ 00:22:49.014 { 00:22:49.014 "name": "BaseBdev1", 00:22:49.014 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:49.014 "is_configured": true, 00:22:49.014 "data_offset": 2048, 00:22:49.014 "data_size": 63488 00:22:49.014 }, 00:22:49.014 { 00:22:49.014 "name": "BaseBdev2", 00:22:49.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.014 "is_configured": false, 00:22:49.014 "data_offset": 0, 00:22:49.014 "data_size": 0 00:22:49.014 }, 00:22:49.014 { 00:22:49.014 "name": "BaseBdev3", 00:22:49.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.014 "is_configured": false, 00:22:49.014 "data_offset": 0, 00:22:49.014 "data_size": 0 00:22:49.014 } 00:22:49.014 ] 00:22:49.014 }' 00:22:49.014 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.014 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.947 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:22:49.947 [2024-07-12 08:49:25.104687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.947 BaseBdev2 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:49.947 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:50.205 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:50.771 [ 00:22:50.771 { 00:22:50.771 "name": "BaseBdev2", 00:22:50.771 "aliases": [ 00:22:50.771 "5650dcca-e5f7-4224-8847-adfcf7b2a7ff" 00:22:50.771 ], 00:22:50.771 "product_name": "Malloc disk", 00:22:50.771 "block_size": 512, 00:22:50.771 "num_blocks": 65536, 00:22:50.771 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:50.771 "assigned_rate_limits": { 00:22:50.771 "rw_ios_per_sec": 0, 00:22:50.771 "rw_mbytes_per_sec": 0, 00:22:50.771 "r_mbytes_per_sec": 0, 00:22:50.771 "w_mbytes_per_sec": 0 00:22:50.771 }, 00:22:50.771 "claimed": true, 00:22:50.771 "claim_type": "exclusive_write", 00:22:50.771 "zoned": false, 00:22:50.771 "supported_io_types": { 00:22:50.771 "read": true, 00:22:50.771 "write": true, 00:22:50.771 "unmap": true, 00:22:50.771 "flush": true, 00:22:50.771 "reset": true, 00:22:50.771 "nvme_admin": false, 00:22:50.771 "nvme_io": false, 00:22:50.771 "nvme_io_md": false, 00:22:50.771 "write_zeroes": true, 00:22:50.771 "zcopy": true, 00:22:50.771 "get_zone_info": false, 00:22:50.771 "zone_management": false, 00:22:50.771 "zone_append": false, 00:22:50.771 "compare": false, 00:22:50.771 "compare_and_write": false, 00:22:50.771 "abort": true, 00:22:50.771 "seek_hole": false, 00:22:50.771 "seek_data": false, 00:22:50.771 "copy": true, 00:22:50.771 "nvme_iov_md": false 00:22:50.771 }, 00:22:50.771 "memory_domains": [ 00:22:50.771 { 00:22:50.771 "dma_device_id": "system", 00:22:50.771 "dma_device_type": 1 00:22:50.771 }, 00:22:50.771 { 00:22:50.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.771 "dma_device_type": 2 00:22:50.771 } 00:22:50.771 ], 00:22:50.771 "driver_specific": {} 00:22:50.771 } 00:22:50.771 ] 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.771 "name": "Existed_Raid", 00:22:50.771 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:50.771 "strip_size_kb": 0, 00:22:50.771 "state": "configuring", 00:22:50.771 "raid_level": "raid1", 00:22:50.771 "superblock": true, 00:22:50.771 "num_base_bdevs": 3, 00:22:50.771 "num_base_bdevs_discovered": 2, 00:22:50.771 "num_base_bdevs_operational": 3, 00:22:50.771 "base_bdevs_list": [ 00:22:50.771 { 00:22:50.771 "name": "BaseBdev1", 00:22:50.771 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:50.771 "is_configured": true, 00:22:50.771 "data_offset": 2048, 00:22:50.771 "data_size": 63488 00:22:50.771 }, 00:22:50.771 { 00:22:50.771 "name": "BaseBdev2", 00:22:50.771 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:50.771 "is_configured": true, 00:22:50.771 "data_offset": 2048, 00:22:50.771 "data_size": 63488 00:22:50.771 }, 00:22:50.771 { 00:22:50.771 "name": "BaseBdev3", 00:22:50.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.771 "is_configured": false, 00:22:50.771 "data_offset": 0, 00:22:50.771 "data_size": 0 00:22:50.771 } 00:22:50.771 ] 00:22:50.771 }' 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.771 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.706 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:51.964 [2024-07-12 08:49:26.964171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.964 [2024-07-12 08:49:26.964595] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:51.964 [2024-07-12 08:49:26.964633] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:51.964 [2024-07-12 08:49:26.964820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:51.964 BaseBdev3 00:22:51.964 [2024-07-12 08:49:26.965334] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:51.964 [2024-07-12 08:49:26.965365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Existed_Raid, raid_bdev 0x616000007580 00:22:51.964 [2024-07-12 08:49:26.965571] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:51.964 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.223 08:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:52.481 [ 00:22:52.481 { 00:22:52.481 "name": "BaseBdev3", 00:22:52.481 "aliases": [ 00:22:52.481 "c80c9149-e46d-4cbc-9934-3b6a3664fcc6" 00:22:52.481 ], 00:22:52.481 "product_name": "Malloc disk", 00:22:52.481 "block_size": 512, 00:22:52.481 "num_blocks": 65536, 00:22:52.481 "uuid": "c80c9149-e46d-4cbc-9934-3b6a3664fcc6", 00:22:52.481 "assigned_rate_limits": { 00:22:52.481 "rw_ios_per_sec": 0, 00:22:52.481 "rw_mbytes_per_sec": 0, 00:22:52.481 "r_mbytes_per_sec": 0, 00:22:52.481 "w_mbytes_per_sec": 0 00:22:52.481 }, 00:22:52.481 "claimed": true, 00:22:52.481 "claim_type": "exclusive_write", 00:22:52.481 "zoned": false, 00:22:52.481 "supported_io_types": { 00:22:52.481 "read": true, 00:22:52.481 "write": true, 00:22:52.481 "unmap": true, 00:22:52.481 "flush": true, 00:22:52.481 "reset": true, 00:22:52.481 "nvme_admin": false, 00:22:52.481 "nvme_io": false, 00:22:52.481 "nvme_io_md": false, 00:22:52.481 "write_zeroes": true, 00:22:52.481 "zcopy": true, 00:22:52.481 "get_zone_info": false, 00:22:52.481 "zone_management": false, 00:22:52.481 "zone_append": false, 00:22:52.481 "compare": false, 00:22:52.481 "compare_and_write": false, 00:22:52.481 "abort": true, 00:22:52.481 "seek_hole": false, 00:22:52.481 "seek_data": false, 00:22:52.481 "copy": true, 00:22:52.481 "nvme_iov_md": false 00:22:52.481 }, 00:22:52.481 "memory_domains": [ 00:22:52.481 { 00:22:52.481 "dma_device_id": "system", 00:22:52.481 "dma_device_type": 1 00:22:52.481 }, 00:22:52.481 { 00:22:52.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.481 "dma_device_type": 2 00:22:52.481 } 00:22:52.481 ], 00:22:52.481 "driver_specific": {} 00:22:52.481 } 00:22:52.481 ] 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.481 08:49:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.481 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.739 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.739 "name": "Existed_Raid", 00:22:52.739 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:52.739 "strip_size_kb": 0, 00:22:52.739 "state": "online", 00:22:52.739 "raid_level": "raid1", 00:22:52.739 "superblock": true, 00:22:52.739 "num_base_bdevs": 3, 00:22:52.739 "num_base_bdevs_discovered": 3, 00:22:52.739 "num_base_bdevs_operational": 3, 00:22:52.739 "base_bdevs_list": [ 00:22:52.739 { 00:22:52.739 "name": "BaseBdev1", 00:22:52.739 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:52.739 "is_configured": true, 00:22:52.739 "data_offset": 2048, 00:22:52.739 "data_size": 63488 00:22:52.739 }, 00:22:52.739 { 00:22:52.739 "name": "BaseBdev2", 00:22:52.739 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:52.739 "is_configured": true, 00:22:52.739 "data_offset": 2048, 00:22:52.739 "data_size": 63488 00:22:52.739 }, 00:22:52.739 { 00:22:52.739 "name": "BaseBdev3", 00:22:52.739 "uuid": "c80c9149-e46d-4cbc-9934-3b6a3664fcc6", 00:22:52.739 "is_configured": true, 00:22:52.739 "data_offset": 2048, 00:22:52.739 "data_size": 63488 00:22:52.739 } 00:22:52.739 ] 00:22:52.739 }' 00:22:52.739 08:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.739 08:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:53.322 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:53.322 08:49:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:53.581 [2024-07-12 08:49:28.749141] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.581 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:53.581 "name": "Existed_Raid", 00:22:53.581 "aliases": [ 00:22:53.581 "7b455932-df48-4209-912a-437efd40bee1" 00:22:53.581 ], 00:22:53.581 "product_name": "Raid Volume", 00:22:53.581 "block_size": 512, 00:22:53.581 "num_blocks": 63488, 00:22:53.581 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:53.581 "assigned_rate_limits": { 00:22:53.581 "rw_ios_per_sec": 0, 00:22:53.581 "rw_mbytes_per_sec": 0, 00:22:53.581 "r_mbytes_per_sec": 0, 00:22:53.581 "w_mbytes_per_sec": 0 00:22:53.581 }, 00:22:53.581 "claimed": false, 00:22:53.581 "zoned": false, 00:22:53.581 "supported_io_types": { 00:22:53.581 "read": true, 00:22:53.581 "write": true, 00:22:53.581 "unmap": false, 00:22:53.581 "flush": false, 00:22:53.581 "reset": true, 00:22:53.581 "nvme_admin": false, 00:22:53.581 "nvme_io": false, 00:22:53.581 "nvme_io_md": false, 00:22:53.581 "write_zeroes": true, 00:22:53.581 "zcopy": false, 00:22:53.581 "get_zone_info": false, 00:22:53.581 "zone_management": false, 00:22:53.581 "zone_append": false, 00:22:53.581 "compare": false, 00:22:53.581 "compare_and_write": false, 00:22:53.581 "abort": false, 00:22:53.581 "seek_hole": false, 00:22:53.581 "seek_data": false, 00:22:53.581 "copy": false, 00:22:53.581 "nvme_iov_md": false 00:22:53.581 }, 00:22:53.581 "memory_domains": [ 00:22:53.581 { 00:22:53.581 "dma_device_id": "system", 00:22:53.581 "dma_device_type": 1 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.581 "dma_device_type": 2 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "dma_device_id": "system", 00:22:53.581 "dma_device_type": 1 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.581 "dma_device_type": 2 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "dma_device_id": "system", 00:22:53.581 "dma_device_type": 1 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.581 "dma_device_type": 2 00:22:53.581 } 00:22:53.581 ], 00:22:53.581 "driver_specific": { 00:22:53.581 "raid": { 00:22:53.581 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:53.581 "strip_size_kb": 0, 00:22:53.581 "state": "online", 00:22:53.581 "raid_level": "raid1", 00:22:53.581 "superblock": true, 00:22:53.581 "num_base_bdevs": 3, 00:22:53.581 "num_base_bdevs_discovered": 3, 00:22:53.581 "num_base_bdevs_operational": 3, 00:22:53.581 "base_bdevs_list": [ 00:22:53.581 { 00:22:53.581 "name": "BaseBdev1", 00:22:53.581 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:53.581 "is_configured": true, 00:22:53.581 "data_offset": 2048, 00:22:53.581 "data_size": 63488 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "name": "BaseBdev2", 00:22:53.581 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:53.581 "is_configured": true, 00:22:53.581 "data_offset": 2048, 00:22:53.581 "data_size": 63488 00:22:53.581 }, 00:22:53.581 { 00:22:53.581 "name": "BaseBdev3", 00:22:53.581 "uuid": "c80c9149-e46d-4cbc-9934-3b6a3664fcc6", 00:22:53.581 "is_configured": true, 00:22:53.581 "data_offset": 2048, 00:22:53.581 "data_size": 63488 00:22:53.581 } 00:22:53.581 ] 00:22:53.581 } 00:22:53.581 } 00:22:53.581 }' 00:22:53.581 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:53.840 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:53.840 BaseBdev2 00:22:53.840 BaseBdev3' 00:22:53.840 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.840 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:53.840 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.099 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.099 "name": "BaseBdev1", 00:22:54.099 "aliases": [ 00:22:54.099 "e1584aae-590d-43b5-8e0e-17aa59d474fb" 00:22:54.099 ], 00:22:54.099 "product_name": "Malloc disk", 00:22:54.099 "block_size": 512, 00:22:54.099 "num_blocks": 65536, 00:22:54.099 "uuid": "e1584aae-590d-43b5-8e0e-17aa59d474fb", 00:22:54.099 "assigned_rate_limits": { 00:22:54.099 "rw_ios_per_sec": 0, 00:22:54.099 "rw_mbytes_per_sec": 0, 00:22:54.099 "r_mbytes_per_sec": 0, 00:22:54.099 "w_mbytes_per_sec": 0 00:22:54.099 }, 00:22:54.099 "claimed": true, 00:22:54.099 "claim_type": "exclusive_write", 00:22:54.099 "zoned": false, 00:22:54.099 "supported_io_types": { 00:22:54.099 "read": true, 00:22:54.099 "write": true, 00:22:54.099 "unmap": true, 00:22:54.099 "flush": true, 00:22:54.099 "reset": true, 00:22:54.099 "nvme_admin": false, 00:22:54.099 "nvme_io": false, 00:22:54.099 "nvme_io_md": false, 00:22:54.099 "write_zeroes": true, 00:22:54.099 "zcopy": true, 00:22:54.099 "get_zone_info": false, 00:22:54.099 "zone_management": false, 00:22:54.099 "zone_append": false, 00:22:54.099 "compare": false, 00:22:54.099 "compare_and_write": false, 00:22:54.099 "abort": true, 00:22:54.099 "seek_hole": false, 00:22:54.099 "seek_data": false, 00:22:54.099 "copy": true, 00:22:54.099 "nvme_iov_md": false 00:22:54.099 }, 00:22:54.099 "memory_domains": [ 00:22:54.099 { 00:22:54.099 "dma_device_id": "system", 00:22:54.099 "dma_device_type": 1 00:22:54.099 }, 00:22:54.099 { 00:22:54.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.099 "dma_device_type": 2 00:22:54.099 } 00:22:54.099 ], 00:22:54.099 "driver_specific": {} 00:22:54.099 }' 00:22:54.099 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.099 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.099 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.099 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.356 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.613 
08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.614 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.614 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:54.614 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.872 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.872 "name": "BaseBdev2", 00:22:54.872 "aliases": [ 00:22:54.872 "5650dcca-e5f7-4224-8847-adfcf7b2a7ff" 00:22:54.872 ], 00:22:54.872 "product_name": "Malloc disk", 00:22:54.872 "block_size": 512, 00:22:54.872 "num_blocks": 65536, 00:22:54.872 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:54.872 "assigned_rate_limits": { 00:22:54.872 "rw_ios_per_sec": 0, 00:22:54.872 "rw_mbytes_per_sec": 0, 00:22:54.872 "r_mbytes_per_sec": 0, 00:22:54.872 "w_mbytes_per_sec": 0 00:22:54.872 }, 00:22:54.872 "claimed": true, 00:22:54.872 "claim_type": "exclusive_write", 00:22:54.872 "zoned": false, 00:22:54.872 "supported_io_types": { 00:22:54.872 "read": true, 00:22:54.872 "write": true, 00:22:54.872 "unmap": true, 00:22:54.872 "flush": true, 00:22:54.872 "reset": true, 00:22:54.872 "nvme_admin": false, 00:22:54.872 "nvme_io": false, 00:22:54.872 "nvme_io_md": false, 00:22:54.872 "write_zeroes": true, 00:22:54.872 "zcopy": true, 00:22:54.872 "get_zone_info": false, 00:22:54.872 "zone_management": false, 00:22:54.872 "zone_append": false, 00:22:54.872 "compare": false, 00:22:54.872 "compare_and_write": false, 00:22:54.872 "abort": true, 00:22:54.872 "seek_hole": false, 00:22:54.872 "seek_data": false, 00:22:54.872 "copy": true, 00:22:54.872 "nvme_iov_md": false 00:22:54.872 }, 00:22:54.872 "memory_domains": [ 00:22:54.872 { 00:22:54.872 "dma_device_id": "system", 00:22:54.872 "dma_device_type": 1 00:22:54.872 }, 00:22:54.872 { 00:22:54.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.872 "dma_device_type": 2 00:22:54.872 } 00:22:54.872 ], 00:22:54.872 "driver_specific": {} 00:22:54.872 }' 00:22:54.872 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.872 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.872 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.872 08:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.872 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.130 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.388 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.388 08:49:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.388 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:55.388 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.388 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.388 "name": "BaseBdev3", 00:22:55.388 "aliases": [ 00:22:55.388 "c80c9149-e46d-4cbc-9934-3b6a3664fcc6" 00:22:55.388 ], 00:22:55.388 "product_name": "Malloc disk", 00:22:55.388 "block_size": 512, 00:22:55.388 "num_blocks": 65536, 00:22:55.388 "uuid": "c80c9149-e46d-4cbc-9934-3b6a3664fcc6", 00:22:55.388 "assigned_rate_limits": { 00:22:55.388 "rw_ios_per_sec": 0, 00:22:55.388 "rw_mbytes_per_sec": 0, 00:22:55.388 "r_mbytes_per_sec": 0, 00:22:55.388 "w_mbytes_per_sec": 0 00:22:55.388 }, 00:22:55.388 "claimed": true, 00:22:55.388 "claim_type": "exclusive_write", 00:22:55.388 "zoned": false, 00:22:55.388 "supported_io_types": { 00:22:55.388 "read": true, 00:22:55.388 "write": true, 00:22:55.388 "unmap": true, 00:22:55.388 "flush": true, 00:22:55.388 "reset": true, 00:22:55.388 "nvme_admin": false, 00:22:55.388 "nvme_io": false, 00:22:55.388 "nvme_io_md": false, 00:22:55.388 "write_zeroes": true, 00:22:55.388 "zcopy": true, 00:22:55.388 "get_zone_info": false, 00:22:55.388 "zone_management": false, 00:22:55.388 "zone_append": false, 00:22:55.388 "compare": false, 00:22:55.388 "compare_and_write": false, 00:22:55.388 "abort": true, 00:22:55.388 "seek_hole": false, 00:22:55.388 "seek_data": false, 00:22:55.388 "copy": true, 00:22:55.388 "nvme_iov_md": false 00:22:55.388 }, 00:22:55.388 "memory_domains": [ 00:22:55.388 { 00:22:55.388 "dma_device_id": "system", 00:22:55.388 "dma_device_type": 1 00:22:55.388 }, 00:22:55.388 { 00:22:55.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.388 "dma_device_type": 2 00:22:55.388 } 00:22:55.388 ], 00:22:55.388 "driver_specific": {} 00:22:55.388 }' 00:22:55.388 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.646 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.904 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.904 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.904 08:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.904 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.904 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.904 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:22:56.469 [2024-07-12 08:49:31.361837] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:56.469 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:56.469 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.470 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.728 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.728 "name": "Existed_Raid", 00:22:56.728 "uuid": "7b455932-df48-4209-912a-437efd40bee1", 00:22:56.728 "strip_size_kb": 0, 00:22:56.728 "state": "online", 00:22:56.728 "raid_level": "raid1", 00:22:56.728 "superblock": true, 00:22:56.728 "num_base_bdevs": 3, 00:22:56.728 "num_base_bdevs_discovered": 2, 00:22:56.728 "num_base_bdevs_operational": 2, 00:22:56.728 "base_bdevs_list": [ 00:22:56.728 { 00:22:56.728 "name": null, 00:22:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.728 "is_configured": false, 00:22:56.728 "data_offset": 2048, 00:22:56.728 "data_size": 63488 00:22:56.728 }, 00:22:56.728 { 00:22:56.728 "name": "BaseBdev2", 00:22:56.728 "uuid": "5650dcca-e5f7-4224-8847-adfcf7b2a7ff", 00:22:56.728 "is_configured": true, 00:22:56.728 "data_offset": 2048, 00:22:56.728 "data_size": 63488 00:22:56.728 }, 00:22:56.728 { 00:22:56.728 "name": "BaseBdev3", 00:22:56.728 "uuid": "c80c9149-e46d-4cbc-9934-3b6a3664fcc6", 00:22:56.728 "is_configured": true, 00:22:56.728 "data_offset": 2048, 00:22:56.728 "data_size": 63488 00:22:56.728 } 00:22:56.728 ] 00:22:56.728 }' 00:22:56.728 08:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.728 08:49:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.293 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:57.293 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:57.293 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.293 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:57.550 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:57.550 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:57.550 08:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:57.807 [2024-07-12 08:49:32.993626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:58.064 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:58.064 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:58.064 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.064 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:58.321 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:58.321 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:58.321 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:58.579 [2024-07-12 08:49:33.638402] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:58.579 [2024-07-12 08:49:33.638566] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.579 [2024-07-12 08:49:33.721908] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.579 [2024-07-12 08:49:33.721963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.579 [2024-07-12 08:49:33.721975] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:58.579 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:58.579 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:58.579 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.579 08:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:58.838 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:58.838 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:58.838 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:58.838 
08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:58.838 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:58.838 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:59.096 BaseBdev2 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:59.355 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.614 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:59.614 [ 00:22:59.614 { 00:22:59.614 "name": "BaseBdev2", 00:22:59.614 "aliases": [ 00:22:59.614 "e59d9bc7-0d5d-4801-8eff-c5752f846a78" 00:22:59.614 ], 00:22:59.614 "product_name": "Malloc disk", 00:22:59.614 "block_size": 512, 00:22:59.614 "num_blocks": 65536, 00:22:59.614 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:22:59.614 "assigned_rate_limits": { 00:22:59.614 "rw_ios_per_sec": 0, 00:22:59.614 "rw_mbytes_per_sec": 0, 00:22:59.614 "r_mbytes_per_sec": 0, 00:22:59.614 "w_mbytes_per_sec": 0 00:22:59.614 }, 00:22:59.614 "claimed": false, 00:22:59.614 "zoned": false, 00:22:59.614 "supported_io_types": { 00:22:59.614 "read": true, 00:22:59.614 "write": true, 00:22:59.614 "unmap": true, 00:22:59.614 "flush": true, 00:22:59.614 "reset": true, 00:22:59.614 "nvme_admin": false, 00:22:59.614 "nvme_io": false, 00:22:59.614 "nvme_io_md": false, 00:22:59.614 "write_zeroes": true, 00:22:59.614 "zcopy": true, 00:22:59.614 "get_zone_info": false, 00:22:59.614 "zone_management": false, 00:22:59.614 "zone_append": false, 00:22:59.614 "compare": false, 00:22:59.614 "compare_and_write": false, 00:22:59.614 "abort": true, 00:22:59.614 "seek_hole": false, 00:22:59.614 "seek_data": false, 00:22:59.614 "copy": true, 00:22:59.614 "nvme_iov_md": false 00:22:59.614 }, 00:22:59.614 "memory_domains": [ 00:22:59.614 { 00:22:59.614 "dma_device_id": "system", 00:22:59.614 "dma_device_type": 1 00:22:59.614 }, 00:22:59.614 { 00:22:59.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.614 "dma_device_type": 2 00:22:59.614 } 00:22:59.614 ], 00:22:59.614 "driver_specific": {} 00:22:59.614 } 00:22:59.614 ] 00:22:59.873 08:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:59.873 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:59.873 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:59.873 08:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:59.873 BaseBdev3 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:59.873 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.131 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:00.390 [ 00:23:00.390 { 00:23:00.390 "name": "BaseBdev3", 00:23:00.390 "aliases": [ 00:23:00.390 "f9ed6493-a3d4-45c3-99fe-a11873080c7a" 00:23:00.390 ], 00:23:00.390 "product_name": "Malloc disk", 00:23:00.390 "block_size": 512, 00:23:00.390 "num_blocks": 65536, 00:23:00.390 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:00.390 "assigned_rate_limits": { 00:23:00.390 "rw_ios_per_sec": 0, 00:23:00.390 "rw_mbytes_per_sec": 0, 00:23:00.390 "r_mbytes_per_sec": 0, 00:23:00.390 "w_mbytes_per_sec": 0 00:23:00.390 }, 00:23:00.390 "claimed": false, 00:23:00.390 "zoned": false, 00:23:00.390 "supported_io_types": { 00:23:00.390 "read": true, 00:23:00.390 "write": true, 00:23:00.390 "unmap": true, 00:23:00.390 "flush": true, 00:23:00.390 "reset": true, 00:23:00.390 "nvme_admin": false, 00:23:00.390 "nvme_io": false, 00:23:00.390 "nvme_io_md": false, 00:23:00.390 "write_zeroes": true, 00:23:00.390 "zcopy": true, 00:23:00.390 "get_zone_info": false, 00:23:00.390 "zone_management": false, 00:23:00.390 "zone_append": false, 00:23:00.390 "compare": false, 00:23:00.390 "compare_and_write": false, 00:23:00.390 "abort": true, 00:23:00.390 "seek_hole": false, 00:23:00.390 "seek_data": false, 00:23:00.390 "copy": true, 00:23:00.390 "nvme_iov_md": false 00:23:00.390 }, 00:23:00.390 "memory_domains": [ 00:23:00.390 { 00:23:00.390 "dma_device_id": "system", 00:23:00.390 "dma_device_type": 1 00:23:00.390 }, 00:23:00.390 { 00:23:00.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.390 "dma_device_type": 2 00:23:00.390 } 00:23:00.390 ], 00:23:00.390 "driver_specific": {} 00:23:00.390 } 00:23:00.390 ] 00:23:00.390 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:00.390 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:00.390 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.390 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:00.648 [2024-07-12 08:49:35.646117] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:00.649 [2024-07-12 
08:49:35.646176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:00.649 [2024-07-12 08:49:35.646198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.649 [2024-07-12 08:49:35.647878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.649 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.908 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.908 "name": "Existed_Raid", 00:23:00.908 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:00.908 "strip_size_kb": 0, 00:23:00.908 "state": "configuring", 00:23:00.908 "raid_level": "raid1", 00:23:00.908 "superblock": true, 00:23:00.908 "num_base_bdevs": 3, 00:23:00.908 "num_base_bdevs_discovered": 2, 00:23:00.908 "num_base_bdevs_operational": 3, 00:23:00.908 "base_bdevs_list": [ 00:23:00.908 { 00:23:00.908 "name": "BaseBdev1", 00:23:00.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.908 "is_configured": false, 00:23:00.908 "data_offset": 0, 00:23:00.908 "data_size": 0 00:23:00.908 }, 00:23:00.908 { 00:23:00.908 "name": "BaseBdev2", 00:23:00.908 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:00.908 "is_configured": true, 00:23:00.908 "data_offset": 2048, 00:23:00.908 "data_size": 63488 00:23:00.908 }, 00:23:00.908 { 00:23:00.908 "name": "BaseBdev3", 00:23:00.908 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:00.908 "is_configured": true, 00:23:00.908 "data_offset": 2048, 00:23:00.908 "data_size": 63488 00:23:00.908 } 00:23:00.908 ] 00:23:00.908 }' 00:23:00.908 08:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.908 08:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.474 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:01.739 [2024-07-12 08:49:36.676561] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:01.739 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:01.739 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.739 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.739 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.739 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.740 "name": "Existed_Raid", 00:23:01.740 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:01.740 "strip_size_kb": 0, 00:23:01.740 "state": "configuring", 00:23:01.740 "raid_level": "raid1", 00:23:01.740 "superblock": true, 00:23:01.740 "num_base_bdevs": 3, 00:23:01.740 "num_base_bdevs_discovered": 1, 00:23:01.740 "num_base_bdevs_operational": 3, 00:23:01.740 "base_bdevs_list": [ 00:23:01.740 { 00:23:01.740 "name": "BaseBdev1", 00:23:01.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.740 "is_configured": false, 00:23:01.740 "data_offset": 0, 00:23:01.740 "data_size": 0 00:23:01.740 }, 00:23:01.740 { 00:23:01.740 "name": null, 00:23:01.740 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:01.740 "is_configured": false, 00:23:01.740 "data_offset": 2048, 00:23:01.740 "data_size": 63488 00:23:01.740 }, 00:23:01.740 { 00:23:01.740 "name": "BaseBdev3", 00:23:01.740 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:01.740 "is_configured": true, 00:23:01.740 "data_offset": 2048, 00:23:01.740 "data_size": 63488 00:23:01.740 } 00:23:01.740 ] 00:23:01.740 }' 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.740 08:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.706 08:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.706 08:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:02.706 08:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:02.706 08:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:02.965 [2024-07-12 08:49:37.992748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.965 BaseBdev1 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:02.965 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.223 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:03.482 [ 00:23:03.482 { 00:23:03.482 "name": "BaseBdev1", 00:23:03.482 "aliases": [ 00:23:03.482 "af43c946-8b83-4947-be97-15ddb67f7661" 00:23:03.482 ], 00:23:03.482 "product_name": "Malloc disk", 00:23:03.482 "block_size": 512, 00:23:03.482 "num_blocks": 65536, 00:23:03.482 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:03.482 "assigned_rate_limits": { 00:23:03.482 "rw_ios_per_sec": 0, 00:23:03.482 "rw_mbytes_per_sec": 0, 00:23:03.482 "r_mbytes_per_sec": 0, 00:23:03.482 "w_mbytes_per_sec": 0 00:23:03.482 }, 00:23:03.482 "claimed": true, 00:23:03.482 "claim_type": "exclusive_write", 00:23:03.482 "zoned": false, 00:23:03.482 "supported_io_types": { 00:23:03.482 "read": true, 00:23:03.482 "write": true, 00:23:03.482 "unmap": true, 00:23:03.482 "flush": true, 00:23:03.482 "reset": true, 00:23:03.482 "nvme_admin": false, 00:23:03.482 "nvme_io": false, 00:23:03.482 "nvme_io_md": false, 00:23:03.482 "write_zeroes": true, 00:23:03.482 "zcopy": true, 00:23:03.482 "get_zone_info": false, 00:23:03.482 "zone_management": false, 00:23:03.482 "zone_append": false, 00:23:03.482 "compare": false, 00:23:03.482 "compare_and_write": false, 00:23:03.482 "abort": true, 00:23:03.482 "seek_hole": false, 00:23:03.482 "seek_data": false, 00:23:03.482 "copy": true, 00:23:03.482 "nvme_iov_md": false 00:23:03.482 }, 00:23:03.482 "memory_domains": [ 00:23:03.482 { 00:23:03.482 "dma_device_id": "system", 00:23:03.482 "dma_device_type": 1 00:23:03.482 }, 00:23:03.482 { 00:23:03.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.482 "dma_device_type": 2 00:23:03.482 } 00:23:03.482 ], 00:23:03.482 "driver_specific": {} 00:23:03.482 } 00:23:03.482 ] 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.482 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.740 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.740 "name": "Existed_Raid", 00:23:03.740 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:03.740 "strip_size_kb": 0, 00:23:03.740 "state": "configuring", 00:23:03.740 "raid_level": "raid1", 00:23:03.740 "superblock": true, 00:23:03.740 "num_base_bdevs": 3, 00:23:03.740 "num_base_bdevs_discovered": 2, 00:23:03.740 "num_base_bdevs_operational": 3, 00:23:03.740 "base_bdevs_list": [ 00:23:03.740 { 00:23:03.740 "name": "BaseBdev1", 00:23:03.740 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:03.740 "is_configured": true, 00:23:03.740 "data_offset": 2048, 00:23:03.740 "data_size": 63488 00:23:03.740 }, 00:23:03.740 { 00:23:03.740 "name": null, 00:23:03.740 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:03.740 "is_configured": false, 00:23:03.740 "data_offset": 2048, 00:23:03.740 "data_size": 63488 00:23:03.740 }, 00:23:03.740 { 00:23:03.740 "name": "BaseBdev3", 00:23:03.740 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:03.740 "is_configured": true, 00:23:03.740 "data_offset": 2048, 00:23:03.740 "data_size": 63488 00:23:03.740 } 00:23:03.740 ] 00:23:03.740 }' 00:23:03.740 08:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.740 08:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.309 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.309 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:04.568 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:04.568 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:04.826 [2024-07-12 08:49:39.857133] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.826 08:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.085 08:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.085 "name": "Existed_Raid", 00:23:05.085 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:05.085 "strip_size_kb": 0, 00:23:05.085 "state": "configuring", 00:23:05.085 "raid_level": "raid1", 00:23:05.085 "superblock": true, 00:23:05.085 "num_base_bdevs": 3, 00:23:05.085 "num_base_bdevs_discovered": 1, 00:23:05.085 "num_base_bdevs_operational": 3, 00:23:05.085 "base_bdevs_list": [ 00:23:05.085 { 00:23:05.085 "name": "BaseBdev1", 00:23:05.085 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:05.085 "is_configured": true, 00:23:05.085 "data_offset": 2048, 00:23:05.085 "data_size": 63488 00:23:05.085 }, 00:23:05.085 { 00:23:05.085 "name": null, 00:23:05.085 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:05.085 "is_configured": false, 00:23:05.085 "data_offset": 2048, 00:23:05.085 "data_size": 63488 00:23:05.085 }, 00:23:05.085 { 00:23:05.085 "name": null, 00:23:05.085 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:05.085 "is_configured": false, 00:23:05.085 "data_offset": 2048, 00:23:05.085 "data_size": 63488 00:23:05.085 } 00:23:05.085 ] 00:23:05.085 }' 00:23:05.085 08:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.085 08:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.653 08:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.653 08:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:06.220 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:06.221 [2024-07-12 08:49:41.361682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.221 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.480 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.480 "name": "Existed_Raid", 00:23:06.480 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:06.480 "strip_size_kb": 0, 00:23:06.480 "state": "configuring", 00:23:06.480 "raid_level": "raid1", 00:23:06.480 "superblock": true, 00:23:06.480 "num_base_bdevs": 3, 00:23:06.480 "num_base_bdevs_discovered": 2, 00:23:06.480 "num_base_bdevs_operational": 3, 00:23:06.480 "base_bdevs_list": [ 00:23:06.480 { 00:23:06.480 "name": "BaseBdev1", 00:23:06.480 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:06.480 "is_configured": true, 00:23:06.480 "data_offset": 2048, 00:23:06.480 "data_size": 63488 00:23:06.480 }, 00:23:06.480 { 00:23:06.480 "name": null, 00:23:06.480 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:06.480 "is_configured": false, 00:23:06.480 "data_offset": 2048, 00:23:06.480 "data_size": 63488 00:23:06.480 }, 00:23:06.480 { 00:23:06.480 "name": "BaseBdev3", 00:23:06.480 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:06.480 "is_configured": true, 00:23:06.480 "data_offset": 2048, 00:23:06.480 "data_size": 63488 00:23:06.480 } 00:23:06.480 ] 00:23:06.480 }' 00:23:06.480 08:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.480 08:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.416 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.416 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:07.416 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:07.416 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:07.675 [2024-07-12 08:49:42.761969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid 
configuring raid1 0 3 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.675 08:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.934 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.934 "name": "Existed_Raid", 00:23:07.934 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:07.934 "strip_size_kb": 0, 00:23:07.934 "state": "configuring", 00:23:07.934 "raid_level": "raid1", 00:23:07.934 "superblock": true, 00:23:07.934 "num_base_bdevs": 3, 00:23:07.934 "num_base_bdevs_discovered": 1, 00:23:07.934 "num_base_bdevs_operational": 3, 00:23:07.934 "base_bdevs_list": [ 00:23:07.934 { 00:23:07.934 "name": null, 00:23:07.934 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:07.934 "is_configured": false, 00:23:07.934 "data_offset": 2048, 00:23:07.934 "data_size": 63488 00:23:07.934 }, 00:23:07.934 { 00:23:07.934 "name": null, 00:23:07.934 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:07.934 "is_configured": false, 00:23:07.934 "data_offset": 2048, 00:23:07.934 "data_size": 63488 00:23:07.934 }, 00:23:07.934 { 00:23:07.934 "name": "BaseBdev3", 00:23:07.934 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:07.934 "is_configured": true, 00:23:07.934 "data_offset": 2048, 00:23:07.934 "data_size": 63488 00:23:07.934 } 00:23:07.934 ] 00:23:07.934 }' 00:23:07.934 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.934 08:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.868 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.868 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:08.868 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:08.868 08:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:09.126 [2024-07-12 08:49:44.184168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.126 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.383 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.383 "name": "Existed_Raid", 00:23:09.383 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:09.383 "strip_size_kb": 0, 00:23:09.383 "state": "configuring", 00:23:09.383 "raid_level": "raid1", 00:23:09.383 "superblock": true, 00:23:09.383 "num_base_bdevs": 3, 00:23:09.383 "num_base_bdevs_discovered": 2, 00:23:09.383 "num_base_bdevs_operational": 3, 00:23:09.383 "base_bdevs_list": [ 00:23:09.383 { 00:23:09.383 "name": null, 00:23:09.383 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:09.383 "is_configured": false, 00:23:09.383 "data_offset": 2048, 00:23:09.383 "data_size": 63488 00:23:09.383 }, 00:23:09.383 { 00:23:09.383 "name": "BaseBdev2", 00:23:09.383 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:09.383 "is_configured": true, 00:23:09.383 "data_offset": 2048, 00:23:09.383 "data_size": 63488 00:23:09.383 }, 00:23:09.383 { 00:23:09.383 "name": "BaseBdev3", 00:23:09.383 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:09.383 "is_configured": true, 00:23:09.383 "data_offset": 2048, 00:23:09.383 "data_size": 63488 00:23:09.383 } 00:23:09.383 ] 00:23:09.383 }' 00:23:09.383 08:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.383 08:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.003 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.003 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:10.259 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:10.259 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:10.259 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:10.516 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u af43c946-8b83-4947-be97-15ddb67f7661 00:23:10.774 [2024-07-12 08:49:45.875831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:10.774 [2024-07-12 08:49:45.876130] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:23:10.774 [2024-07-12 08:49:45.876147] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:10.774 [2024-07-12 08:49:45.876295] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:10.774 NewBaseBdev 00:23:10.774 [2024-07-12 08:49:45.876670] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:23:10.774 [2024-07-12 08:49:45.876698] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:23:10.774 [2024-07-12 08:49:45.876852] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:10.774 08:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:11.031 08:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:11.290 [ 00:23:11.290 { 00:23:11.290 "name": "NewBaseBdev", 00:23:11.290 "aliases": [ 00:23:11.290 "af43c946-8b83-4947-be97-15ddb67f7661" 00:23:11.290 ], 00:23:11.290 "product_name": "Malloc disk", 00:23:11.290 "block_size": 512, 00:23:11.290 "num_blocks": 65536, 00:23:11.290 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:11.290 "assigned_rate_limits": { 00:23:11.290 "rw_ios_per_sec": 0, 00:23:11.290 "rw_mbytes_per_sec": 0, 00:23:11.290 "r_mbytes_per_sec": 0, 00:23:11.290 "w_mbytes_per_sec": 0 00:23:11.290 }, 00:23:11.290 "claimed": true, 00:23:11.290 "claim_type": "exclusive_write", 00:23:11.290 "zoned": false, 00:23:11.290 "supported_io_types": { 00:23:11.290 "read": true, 00:23:11.290 "write": true, 00:23:11.290 "unmap": true, 00:23:11.290 "flush": true, 00:23:11.290 "reset": true, 00:23:11.290 "nvme_admin": false, 00:23:11.290 "nvme_io": false, 00:23:11.290 "nvme_io_md": false, 00:23:11.290 "write_zeroes": true, 00:23:11.290 "zcopy": true, 00:23:11.290 "get_zone_info": false, 00:23:11.290 "zone_management": false, 00:23:11.290 "zone_append": false, 00:23:11.290 "compare": false, 00:23:11.290 "compare_and_write": false, 00:23:11.290 
"abort": true, 00:23:11.290 "seek_hole": false, 00:23:11.290 "seek_data": false, 00:23:11.290 "copy": true, 00:23:11.290 "nvme_iov_md": false 00:23:11.290 }, 00:23:11.290 "memory_domains": [ 00:23:11.290 { 00:23:11.290 "dma_device_id": "system", 00:23:11.290 "dma_device_type": 1 00:23:11.290 }, 00:23:11.290 { 00:23:11.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.290 "dma_device_type": 2 00:23:11.290 } 00:23:11.290 ], 00:23:11.290 "driver_specific": {} 00:23:11.290 } 00:23:11.290 ] 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.290 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.548 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.548 "name": "Existed_Raid", 00:23:11.548 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:11.548 "strip_size_kb": 0, 00:23:11.548 "state": "online", 00:23:11.548 "raid_level": "raid1", 00:23:11.548 "superblock": true, 00:23:11.548 "num_base_bdevs": 3, 00:23:11.548 "num_base_bdevs_discovered": 3, 00:23:11.548 "num_base_bdevs_operational": 3, 00:23:11.548 "base_bdevs_list": [ 00:23:11.548 { 00:23:11.549 "name": "NewBaseBdev", 00:23:11.549 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:11.549 "is_configured": true, 00:23:11.549 "data_offset": 2048, 00:23:11.549 "data_size": 63488 00:23:11.549 }, 00:23:11.549 { 00:23:11.549 "name": "BaseBdev2", 00:23:11.549 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:11.549 "is_configured": true, 00:23:11.549 "data_offset": 2048, 00:23:11.549 "data_size": 63488 00:23:11.549 }, 00:23:11.549 { 00:23:11.549 "name": "BaseBdev3", 00:23:11.549 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:11.549 "is_configured": true, 00:23:11.549 "data_offset": 2048, 00:23:11.549 "data_size": 63488 00:23:11.549 } 00:23:11.549 ] 00:23:11.549 }' 00:23:11.549 08:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.549 08:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.485 08:49:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:12.485 [2024-07-12 08:49:47.584742] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.485 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:12.485 "name": "Existed_Raid", 00:23:12.485 "aliases": [ 00:23:12.485 "9df46af6-07e7-493d-b6c1-5d3a0a494de6" 00:23:12.485 ], 00:23:12.485 "product_name": "Raid Volume", 00:23:12.485 "block_size": 512, 00:23:12.485 "num_blocks": 63488, 00:23:12.485 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:12.485 "assigned_rate_limits": { 00:23:12.485 "rw_ios_per_sec": 0, 00:23:12.485 "rw_mbytes_per_sec": 0, 00:23:12.485 "r_mbytes_per_sec": 0, 00:23:12.485 "w_mbytes_per_sec": 0 00:23:12.485 }, 00:23:12.485 "claimed": false, 00:23:12.485 "zoned": false, 00:23:12.486 "supported_io_types": { 00:23:12.486 "read": true, 00:23:12.486 "write": true, 00:23:12.486 "unmap": false, 00:23:12.486 "flush": false, 00:23:12.486 "reset": true, 00:23:12.486 "nvme_admin": false, 00:23:12.486 "nvme_io": false, 00:23:12.486 "nvme_io_md": false, 00:23:12.486 "write_zeroes": true, 00:23:12.486 "zcopy": false, 00:23:12.486 "get_zone_info": false, 00:23:12.486 "zone_management": false, 00:23:12.486 "zone_append": false, 00:23:12.486 "compare": false, 00:23:12.486 "compare_and_write": false, 00:23:12.486 "abort": false, 00:23:12.486 "seek_hole": false, 00:23:12.486 "seek_data": false, 00:23:12.486 "copy": false, 00:23:12.486 "nvme_iov_md": false 00:23:12.486 }, 00:23:12.486 "memory_domains": [ 00:23:12.486 { 00:23:12.486 "dma_device_id": "system", 00:23:12.486 "dma_device_type": 1 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.486 "dma_device_type": 2 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "dma_device_id": "system", 00:23:12.486 "dma_device_type": 1 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.486 "dma_device_type": 2 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "dma_device_id": "system", 00:23:12.486 "dma_device_type": 1 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.486 "dma_device_type": 2 00:23:12.486 } 00:23:12.486 ], 00:23:12.486 "driver_specific": { 00:23:12.486 "raid": { 00:23:12.486 "uuid": "9df46af6-07e7-493d-b6c1-5d3a0a494de6", 00:23:12.486 "strip_size_kb": 0, 00:23:12.486 "state": "online", 00:23:12.486 "raid_level": "raid1", 00:23:12.486 "superblock": true, 00:23:12.486 "num_base_bdevs": 3, 00:23:12.486 "num_base_bdevs_discovered": 3, 00:23:12.486 "num_base_bdevs_operational": 3, 
00:23:12.486 "base_bdevs_list": [ 00:23:12.486 { 00:23:12.486 "name": "NewBaseBdev", 00:23:12.486 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:12.486 "is_configured": true, 00:23:12.486 "data_offset": 2048, 00:23:12.486 "data_size": 63488 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "name": "BaseBdev2", 00:23:12.486 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:12.486 "is_configured": true, 00:23:12.486 "data_offset": 2048, 00:23:12.486 "data_size": 63488 00:23:12.486 }, 00:23:12.486 { 00:23:12.486 "name": "BaseBdev3", 00:23:12.486 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:12.486 "is_configured": true, 00:23:12.486 "data_offset": 2048, 00:23:12.486 "data_size": 63488 00:23:12.486 } 00:23:12.486 ] 00:23:12.486 } 00:23:12.486 } 00:23:12.486 }' 00:23:12.486 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:12.486 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:12.486 BaseBdev2 00:23:12.486 BaseBdev3' 00:23:12.486 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.486 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.486 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:12.744 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.744 "name": "NewBaseBdev", 00:23:12.744 "aliases": [ 00:23:12.744 "af43c946-8b83-4947-be97-15ddb67f7661" 00:23:12.744 ], 00:23:12.744 "product_name": "Malloc disk", 00:23:12.744 "block_size": 512, 00:23:12.744 "num_blocks": 65536, 00:23:12.744 "uuid": "af43c946-8b83-4947-be97-15ddb67f7661", 00:23:12.744 "assigned_rate_limits": { 00:23:12.744 "rw_ios_per_sec": 0, 00:23:12.744 "rw_mbytes_per_sec": 0, 00:23:12.744 "r_mbytes_per_sec": 0, 00:23:12.744 "w_mbytes_per_sec": 0 00:23:12.744 }, 00:23:12.744 "claimed": true, 00:23:12.744 "claim_type": "exclusive_write", 00:23:12.744 "zoned": false, 00:23:12.744 "supported_io_types": { 00:23:12.744 "read": true, 00:23:12.744 "write": true, 00:23:12.744 "unmap": true, 00:23:12.744 "flush": true, 00:23:12.744 "reset": true, 00:23:12.744 "nvme_admin": false, 00:23:12.744 "nvme_io": false, 00:23:12.744 "nvme_io_md": false, 00:23:12.744 "write_zeroes": true, 00:23:12.744 "zcopy": true, 00:23:12.744 "get_zone_info": false, 00:23:12.744 "zone_management": false, 00:23:12.744 "zone_append": false, 00:23:12.744 "compare": false, 00:23:12.744 "compare_and_write": false, 00:23:12.744 "abort": true, 00:23:12.744 "seek_hole": false, 00:23:12.744 "seek_data": false, 00:23:12.744 "copy": true, 00:23:12.744 "nvme_iov_md": false 00:23:12.744 }, 00:23:12.744 "memory_domains": [ 00:23:12.744 { 00:23:12.744 "dma_device_id": "system", 00:23:12.744 "dma_device_type": 1 00:23:12.744 }, 00:23:12.744 { 00:23:12.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.744 "dma_device_type": 2 00:23:12.744 } 00:23:12.744 ], 00:23:12.744 "driver_specific": {} 00:23:12.744 }' 00:23:12.744 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.003 08:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.003 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:13.262 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.523 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.523 "name": "BaseBdev2", 00:23:13.523 "aliases": [ 00:23:13.523 "e59d9bc7-0d5d-4801-8eff-c5752f846a78" 00:23:13.523 ], 00:23:13.523 "product_name": "Malloc disk", 00:23:13.523 "block_size": 512, 00:23:13.523 "num_blocks": 65536, 00:23:13.523 "uuid": "e59d9bc7-0d5d-4801-8eff-c5752f846a78", 00:23:13.523 "assigned_rate_limits": { 00:23:13.523 "rw_ios_per_sec": 0, 00:23:13.523 "rw_mbytes_per_sec": 0, 00:23:13.523 "r_mbytes_per_sec": 0, 00:23:13.523 "w_mbytes_per_sec": 0 00:23:13.523 }, 00:23:13.523 "claimed": true, 00:23:13.523 "claim_type": "exclusive_write", 00:23:13.523 "zoned": false, 00:23:13.523 "supported_io_types": { 00:23:13.523 "read": true, 00:23:13.523 "write": true, 00:23:13.523 "unmap": true, 00:23:13.523 "flush": true, 00:23:13.523 "reset": true, 00:23:13.523 "nvme_admin": false, 00:23:13.523 "nvme_io": false, 00:23:13.523 "nvme_io_md": false, 00:23:13.523 "write_zeroes": true, 00:23:13.523 "zcopy": true, 00:23:13.523 "get_zone_info": false, 00:23:13.523 "zone_management": false, 00:23:13.523 "zone_append": false, 00:23:13.523 "compare": false, 00:23:13.523 "compare_and_write": false, 00:23:13.523 "abort": true, 00:23:13.523 "seek_hole": false, 00:23:13.523 "seek_data": false, 00:23:13.523 "copy": true, 00:23:13.523 "nvme_iov_md": false 00:23:13.523 }, 00:23:13.523 "memory_domains": [ 00:23:13.523 { 00:23:13.523 "dma_device_id": "system", 00:23:13.523 "dma_device_type": 1 00:23:13.523 }, 00:23:13.523 { 00:23:13.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.523 "dma_device_type": 2 00:23:13.523 } 00:23:13.523 ], 00:23:13.523 "driver_specific": {} 00:23:13.523 }' 00:23:13.523 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.523 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.523 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:13.523 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.783 08:49:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.783 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.783 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.783 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.783 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.783 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.040 08:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.040 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:14.040 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:14.040 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:14.040 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:14.298 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:14.298 "name": "BaseBdev3", 00:23:14.298 "aliases": [ 00:23:14.298 "f9ed6493-a3d4-45c3-99fe-a11873080c7a" 00:23:14.298 ], 00:23:14.298 "product_name": "Malloc disk", 00:23:14.298 "block_size": 512, 00:23:14.298 "num_blocks": 65536, 00:23:14.298 "uuid": "f9ed6493-a3d4-45c3-99fe-a11873080c7a", 00:23:14.298 "assigned_rate_limits": { 00:23:14.298 "rw_ios_per_sec": 0, 00:23:14.298 "rw_mbytes_per_sec": 0, 00:23:14.298 "r_mbytes_per_sec": 0, 00:23:14.298 "w_mbytes_per_sec": 0 00:23:14.298 }, 00:23:14.298 "claimed": true, 00:23:14.298 "claim_type": "exclusive_write", 00:23:14.298 "zoned": false, 00:23:14.298 "supported_io_types": { 00:23:14.298 "read": true, 00:23:14.298 "write": true, 00:23:14.298 "unmap": true, 00:23:14.298 "flush": true, 00:23:14.298 "reset": true, 00:23:14.298 "nvme_admin": false, 00:23:14.298 "nvme_io": false, 00:23:14.298 "nvme_io_md": false, 00:23:14.298 "write_zeroes": true, 00:23:14.298 "zcopy": true, 00:23:14.298 "get_zone_info": false, 00:23:14.298 "zone_management": false, 00:23:14.298 "zone_append": false, 00:23:14.298 "compare": false, 00:23:14.298 "compare_and_write": false, 00:23:14.298 "abort": true, 00:23:14.298 "seek_hole": false, 00:23:14.298 "seek_data": false, 00:23:14.298 "copy": true, 00:23:14.298 "nvme_iov_md": false 00:23:14.298 }, 00:23:14.298 "memory_domains": [ 00:23:14.298 { 00:23:14.298 "dma_device_id": "system", 00:23:14.298 "dma_device_type": 1 00:23:14.298 }, 00:23:14.298 { 00:23:14.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.298 "dma_device_type": 2 00:23:14.298 } 00:23:14.298 ], 00:23:14.298 "driver_specific": {} 00:23:14.298 }' 00:23:14.298 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.298 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.298 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:14.298 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.555 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.813 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:14.813 08:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:14.813 [2024-07-12 08:49:50.001063] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:14.813 [2024-07-12 08:49:50.001109] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.813 [2024-07-12 08:49:50.001211] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.813 [2024-07-12 08:49:50.001586] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.813 [2024-07-12 08:49:50.001628] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 133333 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 133333 ']' 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 133333 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133333 00:23:15.071 killing process with pid 133333 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133333' 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 133333 00:23:15.071 08:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 133333 00:23:15.071 [2024-07-12 08:49:50.036917] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.329 [2024-07-12 08:49:50.299545] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.265 ************************************ 00:23:16.265 END TEST raid_state_function_test_sb 00:23:16.265 ************************************ 00:23:16.265 08:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:16.265 00:23:16.265 real 0m32.706s 00:23:16.265 user 1m1.347s 00:23:16.265 sys 0m3.493s 00:23:16.265 08:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:23:16.265 08:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.265 08:49:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:16.265 08:49:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:23:16.265 08:49:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:16.265 08:49:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.265 08:49:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.265 ************************************ 00:23:16.265 START TEST raid_superblock_test 00:23:16.265 ************************************ 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=134406 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 134406 /var/tmp/spdk-raid.sock 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 134406 ']' 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:23:16.265 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.266 08:49:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.524 [2024-07-12 08:49:51.481525] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:23:16.524 [2024-07-12 08:49:51.481733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134406 ] 00:23:16.524 [2024-07-12 08:49:51.639869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.783 [2024-07-12 08:49:51.915244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.042 [2024-07-12 08:49:52.101827] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.301 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:17.559 malloc1 00:23:17.559 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:17.818 [2024-07-12 08:49:52.953470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.818 [2024-07-12 08:49:52.953613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.818 [2024-07-12 08:49:52.953648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:17.818 [2024-07-12 08:49:52.953667] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.818 [2024-07-12 08:49:52.956038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.818 [2024-07-12 08:49:52.956141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.818 pt1 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.818 08:49:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:18.077 malloc2 00:23:18.077 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:18.335 [2024-07-12 08:49:53.434753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:18.335 [2024-07-12 08:49:53.434900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.335 [2024-07-12 08:49:53.434939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:18.335 [2024-07-12 08:49:53.434960] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.335 [2024-07-12 08:49:53.437346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.335 [2024-07-12 08:49:53.437412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:18.335 pt2 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:18.335 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:18.593 malloc3 00:23:18.594 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:18.852 [2024-07-12 08:49:53.962875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:18.852 [2024-07-12 08:49:53.963015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.852 [2024-07-12 08:49:53.963050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:18.852 [2024-07-12 08:49:53.963108] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.852 [2024-07-12 08:49:53.965568] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.852 [2024-07-12 08:49:53.965646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:18.852 pt3 00:23:18.852 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:18.852 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:18.852 08:49:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:19.111 [2024-07-12 08:49:54.166967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:19.111 [2024-07-12 08:49:54.169351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.111 [2024-07-12 08:49:54.169450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:19.111 [2024-07-12 08:49:54.169673] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:23:19.111 [2024-07-12 08:49:54.169688] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:19.111 [2024-07-12 08:49:54.169821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:19.111 [2024-07-12 08:49:54.170233] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:23:19.111 [2024-07-12 08:49:54.170256] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:23:19.111 [2024-07-12 08:49:54.170433] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.111 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.370 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.370 "name": "raid_bdev1", 00:23:19.370 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:19.370 "strip_size_kb": 0, 00:23:19.370 "state": "online", 
00:23:19.370 "raid_level": "raid1", 00:23:19.370 "superblock": true, 00:23:19.370 "num_base_bdevs": 3, 00:23:19.370 "num_base_bdevs_discovered": 3, 00:23:19.370 "num_base_bdevs_operational": 3, 00:23:19.370 "base_bdevs_list": [ 00:23:19.370 { 00:23:19.370 "name": "pt1", 00:23:19.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.370 "is_configured": true, 00:23:19.370 "data_offset": 2048, 00:23:19.370 "data_size": 63488 00:23:19.370 }, 00:23:19.370 { 00:23:19.370 "name": "pt2", 00:23:19.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.370 "is_configured": true, 00:23:19.370 "data_offset": 2048, 00:23:19.370 "data_size": 63488 00:23:19.370 }, 00:23:19.370 { 00:23:19.370 "name": "pt3", 00:23:19.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:19.370 "is_configured": true, 00:23:19.370 "data_offset": 2048, 00:23:19.370 "data_size": 63488 00:23:19.370 } 00:23:19.370 ] 00:23:19.370 }' 00:23:19.370 08:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.370 08:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:19.937 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:20.196 [2024-07-12 08:49:55.295510] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:20.196 "name": "raid_bdev1", 00:23:20.196 "aliases": [ 00:23:20.196 "8af9cd96-9055-4e9a-a66e-bb8e75682e2e" 00:23:20.196 ], 00:23:20.196 "product_name": "Raid Volume", 00:23:20.196 "block_size": 512, 00:23:20.196 "num_blocks": 63488, 00:23:20.196 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:20.196 "assigned_rate_limits": { 00:23:20.196 "rw_ios_per_sec": 0, 00:23:20.196 "rw_mbytes_per_sec": 0, 00:23:20.196 "r_mbytes_per_sec": 0, 00:23:20.196 "w_mbytes_per_sec": 0 00:23:20.196 }, 00:23:20.196 "claimed": false, 00:23:20.196 "zoned": false, 00:23:20.196 "supported_io_types": { 00:23:20.196 "read": true, 00:23:20.196 "write": true, 00:23:20.196 "unmap": false, 00:23:20.196 "flush": false, 00:23:20.196 "reset": true, 00:23:20.196 "nvme_admin": false, 00:23:20.196 "nvme_io": false, 00:23:20.196 "nvme_io_md": false, 00:23:20.196 "write_zeroes": true, 00:23:20.196 "zcopy": false, 00:23:20.196 "get_zone_info": false, 00:23:20.196 "zone_management": false, 00:23:20.196 "zone_append": false, 00:23:20.196 "compare": false, 00:23:20.196 "compare_and_write": false, 00:23:20.196 "abort": false, 00:23:20.196 "seek_hole": false, 00:23:20.196 "seek_data": false, 00:23:20.196 "copy": false, 00:23:20.196 "nvme_iov_md": false 00:23:20.196 }, 00:23:20.196 "memory_domains": [ 
00:23:20.196 { 00:23:20.196 "dma_device_id": "system", 00:23:20.196 "dma_device_type": 1 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.196 "dma_device_type": 2 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "dma_device_id": "system", 00:23:20.196 "dma_device_type": 1 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.196 "dma_device_type": 2 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "dma_device_id": "system", 00:23:20.196 "dma_device_type": 1 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.196 "dma_device_type": 2 00:23:20.196 } 00:23:20.196 ], 00:23:20.196 "driver_specific": { 00:23:20.196 "raid": { 00:23:20.196 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:20.196 "strip_size_kb": 0, 00:23:20.196 "state": "online", 00:23:20.196 "raid_level": "raid1", 00:23:20.196 "superblock": true, 00:23:20.196 "num_base_bdevs": 3, 00:23:20.196 "num_base_bdevs_discovered": 3, 00:23:20.196 "num_base_bdevs_operational": 3, 00:23:20.196 "base_bdevs_list": [ 00:23:20.196 { 00:23:20.196 "name": "pt1", 00:23:20.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.196 "is_configured": true, 00:23:20.196 "data_offset": 2048, 00:23:20.196 "data_size": 63488 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "name": "pt2", 00:23:20.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.196 "is_configured": true, 00:23:20.196 "data_offset": 2048, 00:23:20.196 "data_size": 63488 00:23:20.196 }, 00:23:20.196 { 00:23:20.196 "name": "pt3", 00:23:20.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.196 "is_configured": true, 00:23:20.196 "data_offset": 2048, 00:23:20.196 "data_size": 63488 00:23:20.196 } 00:23:20.196 ] 00:23:20.196 } 00:23:20.196 } 00:23:20.196 }' 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:20.196 pt2 00:23:20.196 pt3' 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:20.196 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:20.455 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:20.455 "name": "pt1", 00:23:20.455 "aliases": [ 00:23:20.455 "00000000-0000-0000-0000-000000000001" 00:23:20.455 ], 00:23:20.455 "product_name": "passthru", 00:23:20.455 "block_size": 512, 00:23:20.455 "num_blocks": 65536, 00:23:20.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.455 "assigned_rate_limits": { 00:23:20.455 "rw_ios_per_sec": 0, 00:23:20.455 "rw_mbytes_per_sec": 0, 00:23:20.455 "r_mbytes_per_sec": 0, 00:23:20.455 "w_mbytes_per_sec": 0 00:23:20.455 }, 00:23:20.455 "claimed": true, 00:23:20.455 "claim_type": "exclusive_write", 00:23:20.455 "zoned": false, 00:23:20.455 "supported_io_types": { 00:23:20.456 "read": true, 00:23:20.456 "write": true, 00:23:20.456 "unmap": true, 00:23:20.456 "flush": true, 00:23:20.456 "reset": true, 00:23:20.456 "nvme_admin": false, 00:23:20.456 "nvme_io": false, 00:23:20.456 "nvme_io_md": false, 00:23:20.456 "write_zeroes": true, 00:23:20.456 "zcopy": true, 
00:23:20.456 "get_zone_info": false, 00:23:20.456 "zone_management": false, 00:23:20.456 "zone_append": false, 00:23:20.456 "compare": false, 00:23:20.456 "compare_and_write": false, 00:23:20.456 "abort": true, 00:23:20.456 "seek_hole": false, 00:23:20.456 "seek_data": false, 00:23:20.456 "copy": true, 00:23:20.456 "nvme_iov_md": false 00:23:20.456 }, 00:23:20.456 "memory_domains": [ 00:23:20.456 { 00:23:20.456 "dma_device_id": "system", 00:23:20.456 "dma_device_type": 1 00:23:20.456 }, 00:23:20.456 { 00:23:20.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.456 "dma_device_type": 2 00:23:20.456 } 00:23:20.456 ], 00:23:20.456 "driver_specific": { 00:23:20.456 "passthru": { 00:23:20.456 "name": "pt1", 00:23:20.456 "base_bdev_name": "malloc1" 00:23:20.456 } 00:23:20.456 } 00:23:20.456 }' 00:23:20.456 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.456 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:20.714 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.972 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.972 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:20.972 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:20.972 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:20.972 08:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.230 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.230 "name": "pt2", 00:23:21.230 "aliases": [ 00:23:21.230 "00000000-0000-0000-0000-000000000002" 00:23:21.230 ], 00:23:21.230 "product_name": "passthru", 00:23:21.230 "block_size": 512, 00:23:21.230 "num_blocks": 65536, 00:23:21.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.230 "assigned_rate_limits": { 00:23:21.230 "rw_ios_per_sec": 0, 00:23:21.230 "rw_mbytes_per_sec": 0, 00:23:21.230 "r_mbytes_per_sec": 0, 00:23:21.230 "w_mbytes_per_sec": 0 00:23:21.230 }, 00:23:21.230 "claimed": true, 00:23:21.230 "claim_type": "exclusive_write", 00:23:21.231 "zoned": false, 00:23:21.231 "supported_io_types": { 00:23:21.231 "read": true, 00:23:21.231 "write": true, 00:23:21.231 "unmap": true, 00:23:21.231 "flush": true, 00:23:21.231 "reset": true, 00:23:21.231 "nvme_admin": false, 00:23:21.231 "nvme_io": false, 00:23:21.231 "nvme_io_md": false, 00:23:21.231 "write_zeroes": true, 00:23:21.231 "zcopy": true, 00:23:21.231 "get_zone_info": false, 00:23:21.231 "zone_management": false, 00:23:21.231 "zone_append": false, 00:23:21.231 "compare": false, 00:23:21.231 
"compare_and_write": false, 00:23:21.231 "abort": true, 00:23:21.231 "seek_hole": false, 00:23:21.231 "seek_data": false, 00:23:21.231 "copy": true, 00:23:21.231 "nvme_iov_md": false 00:23:21.231 }, 00:23:21.231 "memory_domains": [ 00:23:21.231 { 00:23:21.231 "dma_device_id": "system", 00:23:21.231 "dma_device_type": 1 00:23:21.231 }, 00:23:21.231 { 00:23:21.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.231 "dma_device_type": 2 00:23:21.231 } 00:23:21.231 ], 00:23:21.231 "driver_specific": { 00:23:21.231 "passthru": { 00:23:21.231 "name": "pt2", 00:23:21.231 "base_bdev_name": "malloc2" 00:23:21.231 } 00:23:21.231 } 00:23:21.231 }' 00:23:21.231 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.231 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.231 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:21.231 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.231 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:21.489 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.748 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.748 "name": "pt3", 00:23:21.748 "aliases": [ 00:23:21.748 "00000000-0000-0000-0000-000000000003" 00:23:21.748 ], 00:23:21.748 "product_name": "passthru", 00:23:21.748 "block_size": 512, 00:23:21.748 "num_blocks": 65536, 00:23:21.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.748 "assigned_rate_limits": { 00:23:21.748 "rw_ios_per_sec": 0, 00:23:21.748 "rw_mbytes_per_sec": 0, 00:23:21.748 "r_mbytes_per_sec": 0, 00:23:21.748 "w_mbytes_per_sec": 0 00:23:21.748 }, 00:23:21.748 "claimed": true, 00:23:21.748 "claim_type": "exclusive_write", 00:23:21.748 "zoned": false, 00:23:21.748 "supported_io_types": { 00:23:21.748 "read": true, 00:23:21.748 "write": true, 00:23:21.748 "unmap": true, 00:23:21.748 "flush": true, 00:23:21.748 "reset": true, 00:23:21.748 "nvme_admin": false, 00:23:21.748 "nvme_io": false, 00:23:21.748 "nvme_io_md": false, 00:23:21.748 "write_zeroes": true, 00:23:21.748 "zcopy": true, 00:23:21.748 "get_zone_info": false, 00:23:21.748 "zone_management": false, 00:23:21.748 "zone_append": false, 00:23:21.748 "compare": false, 00:23:21.748 "compare_and_write": false, 00:23:21.748 "abort": true, 00:23:21.748 "seek_hole": false, 00:23:21.748 "seek_data": false, 00:23:21.748 "copy": true, 
00:23:21.748 "nvme_iov_md": false 00:23:21.748 }, 00:23:21.748 "memory_domains": [ 00:23:21.748 { 00:23:21.748 "dma_device_id": "system", 00:23:21.748 "dma_device_type": 1 00:23:21.748 }, 00:23:21.748 { 00:23:21.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.748 "dma_device_type": 2 00:23:21.748 } 00:23:21.748 ], 00:23:21.748 "driver_specific": { 00:23:21.748 "passthru": { 00:23:21.748 "name": "pt3", 00:23:21.748 "base_bdev_name": "malloc3" 00:23:21.748 } 00:23:21.748 } 00:23:21.748 }' 00:23:21.748 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.748 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:22.007 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:22.007 08:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:22.007 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:22.007 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:22.007 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:22.007 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:22.265 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:22.560 [2024-07-12 08:49:57.540183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.560 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8af9cd96-9055-4e9a-a66e-bb8e75682e2e 00:23:22.560 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8af9cd96-9055-4e9a-a66e-bb8e75682e2e ']' 00:23:22.560 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.823 [2024-07-12 08:49:57.811885] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.823 [2024-07-12 08:49:57.811917] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.823 [2024-07-12 08:49:57.811995] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.823 [2024-07-12 08:49:57.812102] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.823 [2024-07-12 08:49:57.812116] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:22.824 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.824 08:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.081 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:23.338 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.338 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:23.596 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:23.596 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:23.853 08:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:24.109 [2024-07-12 08:49:59.172178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:24.109 [2024-07-12 08:49:59.173946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
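This bdev_raid_create is meant to fail. raid_bdev1 has just been deleted and all three passthru bdevs removed (the jq '[.[] | select(.product_name == "passthru")] | any' probe above came back false), but the original create used -s, so each malloc bdev still carries a superblock, written through the passthru layer, that records pt1/pt2/pt3 as the members. Probing malloc1-malloc3 directly therefore finds metadata for a different RAID volume, and the request is rejected with -17 "File exists", as the *ERROR* lines and the JSON-RPC response just below show. The NOT wrapper comes from autotest_common.sh and inverts the exit status; judging from the @648-@675 trace, its effect here is roughly this (a sketch of the semantics, not the literal helper):

# Expected-failure probe: stale superblocks on malloc1-3 must block the create.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
     bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
  exit 1  # creating over foreign superblocks succeeded: that would be a bug
fi

Afterwards the test re-creates pt1, pt2, and pt3 on top of the same malloc bdevs; each registration triggers "raid superblock found on bdev ptN", and raid_bdev1 reassembles itself from the superblocks, moving from configuring back to online without any further bdev_raid_create call.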
00:23:24.109 [2024-07-12 08:49:59.174009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:24.109 [2024-07-12 08:49:59.174067] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:24.109 [2024-07-12 08:49:59.174182] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:24.109 [2024-07-12 08:49:59.174255] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:24.109 [2024-07-12 08:49:59.174286] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.109 [2024-07-12 08:49:59.174297] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:23:24.109 request: 00:23:24.109 { 00:23:24.109 "name": "raid_bdev1", 00:23:24.109 "raid_level": "raid1", 00:23:24.109 "base_bdevs": [ 00:23:24.109 "malloc1", 00:23:24.109 "malloc2", 00:23:24.109 "malloc3" 00:23:24.109 ], 00:23:24.109 "superblock": false, 00:23:24.109 "method": "bdev_raid_create", 00:23:24.109 "req_id": 1 00:23:24.109 } 00:23:24.109 Got JSON-RPC error response 00:23:24.109 response: 00:23:24.109 { 00:23:24.109 "code": -17, 00:23:24.109 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:24.109 } 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.109 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:24.365 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:24.365 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:24.365 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:24.622 [2024-07-12 08:49:59.620221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:24.622 [2024-07-12 08:49:59.620337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.622 [2024-07-12 08:49:59.620384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:24.622 [2024-07-12 08:49:59.620421] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.622 [2024-07-12 08:49:59.622589] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.622 [2024-07-12 08:49:59.622634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:24.622 [2024-07-12 08:49:59.622762] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:24.622 [2024-07-12 08:49:59.622820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:24.622 pt1 00:23:24.622 08:49:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.622 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.879 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.879 "name": "raid_bdev1", 00:23:24.879 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:24.879 "strip_size_kb": 0, 00:23:24.879 "state": "configuring", 00:23:24.879 "raid_level": "raid1", 00:23:24.879 "superblock": true, 00:23:24.879 "num_base_bdevs": 3, 00:23:24.879 "num_base_bdevs_discovered": 1, 00:23:24.879 "num_base_bdevs_operational": 3, 00:23:24.879 "base_bdevs_list": [ 00:23:24.879 { 00:23:24.879 "name": "pt1", 00:23:24.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:24.879 "is_configured": true, 00:23:24.879 "data_offset": 2048, 00:23:24.879 "data_size": 63488 00:23:24.879 }, 00:23:24.879 { 00:23:24.879 "name": null, 00:23:24.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.879 "is_configured": false, 00:23:24.879 "data_offset": 2048, 00:23:24.879 "data_size": 63488 00:23:24.879 }, 00:23:24.879 { 00:23:24.879 "name": null, 00:23:24.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:24.879 "is_configured": false, 00:23:24.879 "data_offset": 2048, 00:23:24.879 "data_size": 63488 00:23:24.879 } 00:23:24.879 ] 00:23:24.879 }' 00:23:24.879 08:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.879 08:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.470 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:23:25.470 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:25.729 [2024-07-12 08:50:00.744741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:25.729 [2024-07-12 08:50:00.744854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.729 [2024-07-12 08:50:00.744893] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:25.729 [2024-07-12 08:50:00.744913] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.729 [2024-07-12 08:50:00.745498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.729 [2024-07-12 08:50:00.745557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:25.729 [2024-07-12 08:50:00.745672] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:25.729 [2024-07-12 08:50:00.745707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:25.729 pt2 00:23:25.729 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:25.987 [2024-07-12 08:50:00.948863] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.987 08:50:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.245 08:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.245 "name": "raid_bdev1", 00:23:26.245 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:26.245 "strip_size_kb": 0, 00:23:26.245 "state": "configuring", 00:23:26.245 "raid_level": "raid1", 00:23:26.245 "superblock": true, 00:23:26.245 "num_base_bdevs": 3, 00:23:26.245 "num_base_bdevs_discovered": 1, 00:23:26.245 "num_base_bdevs_operational": 3, 00:23:26.245 "base_bdevs_list": [ 00:23:26.245 { 00:23:26.245 "name": "pt1", 00:23:26.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.245 "is_configured": true, 00:23:26.245 "data_offset": 2048, 00:23:26.245 "data_size": 63488 00:23:26.245 }, 00:23:26.245 { 00:23:26.245 "name": null, 00:23:26.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.245 "is_configured": false, 00:23:26.245 "data_offset": 2048, 00:23:26.245 "data_size": 63488 00:23:26.245 }, 00:23:26.245 { 00:23:26.245 "name": null, 00:23:26.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:26.245 "is_configured": false, 00:23:26.245 "data_offset": 2048, 00:23:26.245 "data_size": 63488 00:23:26.245 } 00:23:26.245 ] 00:23:26.245 }' 00:23:26.245 08:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.245 
08:50:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.810 08:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:26.810 08:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:26.810 08:50:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:27.068 [2024-07-12 08:50:02.089050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:27.068 [2024-07-12 08:50:02.089177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.068 [2024-07-12 08:50:02.089224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:27.068 [2024-07-12 08:50:02.089253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.068 [2024-07-12 08:50:02.089846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.068 [2024-07-12 08:50:02.089918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:27.068 [2024-07-12 08:50:02.090022] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:27.068 [2024-07-12 08:50:02.090049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:27.068 pt2 00:23:27.068 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:27.068 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.068 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:27.326 [2024-07-12 08:50:02.301073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:27.326 [2024-07-12 08:50:02.301178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.326 [2024-07-12 08:50:02.301215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:27.326 [2024-07-12 08:50:02.301237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.326 [2024-07-12 08:50:02.301721] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.326 [2024-07-12 08:50:02.301765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:27.326 [2024-07-12 08:50:02.301896] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:27.326 [2024-07-12 08:50:02.301936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:27.326 [2024-07-12 08:50:02.302081] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:23:27.326 [2024-07-12 08:50:02.302104] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:27.326 [2024-07-12 08:50:02.302199] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:27.326 [2024-07-12 08:50:02.302517] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:23:27.326 [2024-07-12 08:50:02.302540] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x616000009f80 00:23:27.326 [2024-07-12 08:50:02.302678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.326 pt3 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.326 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.584 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.584 "name": "raid_bdev1", 00:23:27.584 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:27.584 "strip_size_kb": 0, 00:23:27.584 "state": "online", 00:23:27.584 "raid_level": "raid1", 00:23:27.584 "superblock": true, 00:23:27.584 "num_base_bdevs": 3, 00:23:27.584 "num_base_bdevs_discovered": 3, 00:23:27.584 "num_base_bdevs_operational": 3, 00:23:27.584 "base_bdevs_list": [ 00:23:27.584 { 00:23:27.584 "name": "pt1", 00:23:27.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.584 "is_configured": true, 00:23:27.584 "data_offset": 2048, 00:23:27.584 "data_size": 63488 00:23:27.584 }, 00:23:27.585 { 00:23:27.585 "name": "pt2", 00:23:27.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.585 "is_configured": true, 00:23:27.585 "data_offset": 2048, 00:23:27.585 "data_size": 63488 00:23:27.585 }, 00:23:27.585 { 00:23:27.585 "name": "pt3", 00:23:27.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:27.585 "is_configured": true, 00:23:27.585 "data_offset": 2048, 00:23:27.585 "data_size": 63488 00:23:27.585 } 00:23:27.585 ] 00:23:27.585 }' 00:23:27.585 08:50:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.585 08:50:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:28.149 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:28.406 [2024-07-12 08:50:03.513701] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.406 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:28.406 "name": "raid_bdev1", 00:23:28.406 "aliases": [ 00:23:28.406 "8af9cd96-9055-4e9a-a66e-bb8e75682e2e" 00:23:28.406 ], 00:23:28.406 "product_name": "Raid Volume", 00:23:28.406 "block_size": 512, 00:23:28.406 "num_blocks": 63488, 00:23:28.406 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:28.406 "assigned_rate_limits": { 00:23:28.406 "rw_ios_per_sec": 0, 00:23:28.406 "rw_mbytes_per_sec": 0, 00:23:28.406 "r_mbytes_per_sec": 0, 00:23:28.406 "w_mbytes_per_sec": 0 00:23:28.406 }, 00:23:28.406 "claimed": false, 00:23:28.406 "zoned": false, 00:23:28.406 "supported_io_types": { 00:23:28.406 "read": true, 00:23:28.406 "write": true, 00:23:28.406 "unmap": false, 00:23:28.406 "flush": false, 00:23:28.406 "reset": true, 00:23:28.406 "nvme_admin": false, 00:23:28.406 "nvme_io": false, 00:23:28.406 "nvme_io_md": false, 00:23:28.406 "write_zeroes": true, 00:23:28.406 "zcopy": false, 00:23:28.406 "get_zone_info": false, 00:23:28.406 "zone_management": false, 00:23:28.406 "zone_append": false, 00:23:28.406 "compare": false, 00:23:28.406 "compare_and_write": false, 00:23:28.406 "abort": false, 00:23:28.406 "seek_hole": false, 00:23:28.406 "seek_data": false, 00:23:28.406 "copy": false, 00:23:28.406 "nvme_iov_md": false 00:23:28.406 }, 00:23:28.406 "memory_domains": [ 00:23:28.406 { 00:23:28.406 "dma_device_id": "system", 00:23:28.406 "dma_device_type": 1 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.406 "dma_device_type": 2 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "dma_device_id": "system", 00:23:28.406 "dma_device_type": 1 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.406 "dma_device_type": 2 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "dma_device_id": "system", 00:23:28.406 "dma_device_type": 1 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.406 "dma_device_type": 2 00:23:28.406 } 00:23:28.406 ], 00:23:28.406 "driver_specific": { 00:23:28.406 "raid": { 00:23:28.406 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:28.406 "strip_size_kb": 0, 00:23:28.406 "state": "online", 00:23:28.406 "raid_level": "raid1", 00:23:28.406 "superblock": true, 00:23:28.406 "num_base_bdevs": 3, 00:23:28.406 "num_base_bdevs_discovered": 3, 00:23:28.406 "num_base_bdevs_operational": 3, 00:23:28.406 "base_bdevs_list": [ 00:23:28.406 { 00:23:28.406 "name": "pt1", 00:23:28.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.406 "is_configured": true, 00:23:28.406 "data_offset": 2048, 00:23:28.406 "data_size": 63488 00:23:28.406 }, 00:23:28.407 { 00:23:28.407 "name": "pt2", 00:23:28.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.407 "is_configured": true, 00:23:28.407 "data_offset": 2048, 00:23:28.407 "data_size": 
63488 00:23:28.407 }, 00:23:28.407 { 00:23:28.407 "name": "pt3", 00:23:28.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.407 "is_configured": true, 00:23:28.407 "data_offset": 2048, 00:23:28.407 "data_size": 63488 00:23:28.407 } 00:23:28.407 ] 00:23:28.407 } 00:23:28.407 } 00:23:28.407 }' 00:23:28.407 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.407 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:28.407 pt2 00:23:28.407 pt3' 00:23:28.407 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.407 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:28.407 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.970 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.970 "name": "pt1", 00:23:28.970 "aliases": [ 00:23:28.970 "00000000-0000-0000-0000-000000000001" 00:23:28.970 ], 00:23:28.970 "product_name": "passthru", 00:23:28.970 "block_size": 512, 00:23:28.970 "num_blocks": 65536, 00:23:28.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.970 "assigned_rate_limits": { 00:23:28.970 "rw_ios_per_sec": 0, 00:23:28.970 "rw_mbytes_per_sec": 0, 00:23:28.970 "r_mbytes_per_sec": 0, 00:23:28.971 "w_mbytes_per_sec": 0 00:23:28.971 }, 00:23:28.971 "claimed": true, 00:23:28.971 "claim_type": "exclusive_write", 00:23:28.971 "zoned": false, 00:23:28.971 "supported_io_types": { 00:23:28.971 "read": true, 00:23:28.971 "write": true, 00:23:28.971 "unmap": true, 00:23:28.971 "flush": true, 00:23:28.971 "reset": true, 00:23:28.971 "nvme_admin": false, 00:23:28.971 "nvme_io": false, 00:23:28.971 "nvme_io_md": false, 00:23:28.971 "write_zeroes": true, 00:23:28.971 "zcopy": true, 00:23:28.971 "get_zone_info": false, 00:23:28.971 "zone_management": false, 00:23:28.971 "zone_append": false, 00:23:28.971 "compare": false, 00:23:28.971 "compare_and_write": false, 00:23:28.971 "abort": true, 00:23:28.971 "seek_hole": false, 00:23:28.971 "seek_data": false, 00:23:28.971 "copy": true, 00:23:28.971 "nvme_iov_md": false 00:23:28.971 }, 00:23:28.971 "memory_domains": [ 00:23:28.971 { 00:23:28.971 "dma_device_id": "system", 00:23:28.971 "dma_device_type": 1 00:23:28.971 }, 00:23:28.971 { 00:23:28.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.971 "dma_device_type": 2 00:23:28.971 } 00:23:28.971 ], 00:23:28.971 "driver_specific": { 00:23:28.971 "passthru": { 00:23:28.971 "name": "pt1", 00:23:28.971 "base_bdev_name": "malloc1" 00:23:28.971 } 00:23:28.971 } 00:23:28.971 }' 00:23:28.971 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.971 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.971 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.971 08:50:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.971 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.971 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.971 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.971 08:50:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:29.228 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.487 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.487 "name": "pt2", 00:23:29.487 "aliases": [ 00:23:29.487 "00000000-0000-0000-0000-000000000002" 00:23:29.487 ], 00:23:29.487 "product_name": "passthru", 00:23:29.487 "block_size": 512, 00:23:29.487 "num_blocks": 65536, 00:23:29.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.487 "assigned_rate_limits": { 00:23:29.487 "rw_ios_per_sec": 0, 00:23:29.487 "rw_mbytes_per_sec": 0, 00:23:29.487 "r_mbytes_per_sec": 0, 00:23:29.487 "w_mbytes_per_sec": 0 00:23:29.487 }, 00:23:29.487 "claimed": true, 00:23:29.487 "claim_type": "exclusive_write", 00:23:29.487 "zoned": false, 00:23:29.487 "supported_io_types": { 00:23:29.487 "read": true, 00:23:29.487 "write": true, 00:23:29.487 "unmap": true, 00:23:29.487 "flush": true, 00:23:29.487 "reset": true, 00:23:29.487 "nvme_admin": false, 00:23:29.487 "nvme_io": false, 00:23:29.487 "nvme_io_md": false, 00:23:29.487 "write_zeroes": true, 00:23:29.487 "zcopy": true, 00:23:29.487 "get_zone_info": false, 00:23:29.487 "zone_management": false, 00:23:29.487 "zone_append": false, 00:23:29.487 "compare": false, 00:23:29.487 "compare_and_write": false, 00:23:29.487 "abort": true, 00:23:29.487 "seek_hole": false, 00:23:29.487 "seek_data": false, 00:23:29.487 "copy": true, 00:23:29.487 "nvme_iov_md": false 00:23:29.487 }, 00:23:29.487 "memory_domains": [ 00:23:29.487 { 00:23:29.487 "dma_device_id": "system", 00:23:29.487 "dma_device_type": 1 00:23:29.487 }, 00:23:29.487 { 00:23:29.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.487 "dma_device_type": 2 00:23:29.487 } 00:23:29.487 ], 00:23:29.487 "driver_specific": { 00:23:29.487 "passthru": { 00:23:29.487 "name": "pt2", 00:23:29.487 "base_bdev_name": "malloc2" 00:23:29.487 } 00:23:29.487 } 00:23:29.487 }' 00:23:29.487 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.487 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.744 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.744 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.745 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.003 08:50:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.003 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.003 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.003 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:30.003 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.261 "name": "pt3", 00:23:30.261 "aliases": [ 00:23:30.261 "00000000-0000-0000-0000-000000000003" 00:23:30.261 ], 00:23:30.261 "product_name": "passthru", 00:23:30.261 "block_size": 512, 00:23:30.261 "num_blocks": 65536, 00:23:30.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.261 "assigned_rate_limits": { 00:23:30.261 "rw_ios_per_sec": 0, 00:23:30.261 "rw_mbytes_per_sec": 0, 00:23:30.261 "r_mbytes_per_sec": 0, 00:23:30.261 "w_mbytes_per_sec": 0 00:23:30.261 }, 00:23:30.261 "claimed": true, 00:23:30.261 "claim_type": "exclusive_write", 00:23:30.261 "zoned": false, 00:23:30.261 "supported_io_types": { 00:23:30.261 "read": true, 00:23:30.261 "write": true, 00:23:30.261 "unmap": true, 00:23:30.261 "flush": true, 00:23:30.261 "reset": true, 00:23:30.261 "nvme_admin": false, 00:23:30.261 "nvme_io": false, 00:23:30.261 "nvme_io_md": false, 00:23:30.261 "write_zeroes": true, 00:23:30.261 "zcopy": true, 00:23:30.261 "get_zone_info": false, 00:23:30.261 "zone_management": false, 00:23:30.261 "zone_append": false, 00:23:30.261 "compare": false, 00:23:30.261 "compare_and_write": false, 00:23:30.261 "abort": true, 00:23:30.261 "seek_hole": false, 00:23:30.261 "seek_data": false, 00:23:30.261 "copy": true, 00:23:30.261 "nvme_iov_md": false 00:23:30.261 }, 00:23:30.261 "memory_domains": [ 00:23:30.261 { 00:23:30.261 "dma_device_id": "system", 00:23:30.261 "dma_device_type": 1 00:23:30.261 }, 00:23:30.261 { 00:23:30.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.261 "dma_device_type": 2 00:23:30.261 } 00:23:30.261 ], 00:23:30.261 "driver_specific": { 00:23:30.261 "passthru": { 00:23:30.261 "name": "pt3", 00:23:30.261 "base_bdev_name": "malloc3" 00:23:30.261 } 00:23:30.261 } 00:23:30.261 }' 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.261 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.520 
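The @203-@208 loop traced above fetches each passthru base bdev over the RPC socket and asserts its geometry before any superblock manipulation starts. A minimal sketch of that check pattern follows (the helper and variable names match the trace; feeding $base_bdev_info into jq via a here-string is an assumption, since the trace only shows the jq filters and the comparison results):

  for name in $base_bdev_names; do
      base_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]      # passthru over 512-byte malloc
      [[ $(jq .md_size <<< "$base_bdev_info") == null ]]        # no separate metadata region
      [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
      [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]       # no DIF protection
  done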
08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:30.520 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.778 [2024-07-12 08:50:05.874546] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.778 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8af9cd96-9055-4e9a-a66e-bb8e75682e2e '!=' 8af9cd96-9055-4e9a-a66e-bb8e75682e2e ']' 00:23:30.778 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:30.778 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:30.778 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:30.778 08:50:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:31.036 [2024-07-12 08:50:06.094313] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.036 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.295 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.295 "name": "raid_bdev1", 00:23:31.295 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:31.295 "strip_size_kb": 0, 00:23:31.295 "state": "online", 00:23:31.295 "raid_level": "raid1", 00:23:31.295 "superblock": true, 00:23:31.295 "num_base_bdevs": 3, 00:23:31.295 "num_base_bdevs_discovered": 2, 00:23:31.295 "num_base_bdevs_operational": 2, 00:23:31.295 "base_bdevs_list": [ 00:23:31.295 { 00:23:31.295 "name": null, 00:23:31.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.295 "is_configured": false, 00:23:31.295 "data_offset": 2048, 00:23:31.295 "data_size": 63488 00:23:31.295 }, 00:23:31.295 { 00:23:31.295 "name": "pt2", 00:23:31.295 "uuid": "00000000-0000-0000-0000-000000000002", 
00:23:31.295 "is_configured": true, 00:23:31.295 "data_offset": 2048, 00:23:31.295 "data_size": 63488 00:23:31.295 }, 00:23:31.295 { 00:23:31.295 "name": "pt3", 00:23:31.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:31.295 "is_configured": true, 00:23:31.295 "data_offset": 2048, 00:23:31.295 "data_size": 63488 00:23:31.295 } 00:23:31.295 ] 00:23:31.295 }' 00:23:31.295 08:50:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.295 08:50:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.862 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:32.120 [2024-07-12 08:50:07.290575] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.120 [2024-07-12 08:50:07.290615] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.120 [2024-07-12 08:50:07.290706] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.120 [2024-07-12 08:50:07.290777] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.120 [2024-07-12 08:50:07.290789] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:23:32.120 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.120 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:32.378 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:32.378 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:32.378 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:32.378 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:32.378 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:32.636 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:32.636 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:32.636 08:50:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:32.894 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:32.894 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:32.894 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:32.894 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:32.894 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.152 [2024-07-12 08:50:08.212787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.152 [2024-07-12 08:50:08.212905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.152 [2024-07-12 
08:50:08.212965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:33.152 [2024-07-12 08:50:08.212989] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.152 [2024-07-12 08:50:08.215493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.152 [2024-07-12 08:50:08.215541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.152 [2024-07-12 08:50:08.215691] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:33.152 [2024-07-12 08:50:08.215746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.152 pt2 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.152 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.410 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.410 "name": "raid_bdev1", 00:23:33.410 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:33.410 "strip_size_kb": 0, 00:23:33.410 "state": "configuring", 00:23:33.410 "raid_level": "raid1", 00:23:33.410 "superblock": true, 00:23:33.410 "num_base_bdevs": 3, 00:23:33.410 "num_base_bdevs_discovered": 1, 00:23:33.410 "num_base_bdevs_operational": 2, 00:23:33.410 "base_bdevs_list": [ 00:23:33.410 { 00:23:33.410 "name": null, 00:23:33.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.410 "is_configured": false, 00:23:33.410 "data_offset": 2048, 00:23:33.410 "data_size": 63488 00:23:33.410 }, 00:23:33.410 { 00:23:33.410 "name": "pt2", 00:23:33.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.410 "is_configured": true, 00:23:33.410 "data_offset": 2048, 00:23:33.410 "data_size": 63488 00:23:33.410 }, 00:23:33.410 { 00:23:33.410 "name": null, 00:23:33.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:33.410 "is_configured": false, 00:23:33.410 "data_offset": 2048, 00:23:33.410 "data_size": 63488 00:23:33.410 } 00:23:33.410 ] 00:23:33.410 }' 00:23:33.410 08:50:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.410 08:50:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.976 08:50:09 
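Every checkpoint in this test funnels through verify_raid_bdev_state, whose xtrace is the block of @116-@126 locals repeated above. Reconstructed from that trace, the helper looks roughly like this (only the locals and the info fetch appear verbatim in the log; the individual field assertions are an assumption inferred from the argument names):

  verify_raid_bdev_state() {
      local raid_bdev_name=$1 expected_state=$2 raid_level=$3
      local strip_size=$4 num_base_bdevs_operational=$5
      local raid_bdev_info
      raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
      [[ $(jq -r .state <<< "$raid_bdev_info") == "$expected_state" ]]
      [[ $(jq -r .raid_level <<< "$raid_bdev_info") == "$raid_level" ]]
      (( $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == num_base_bdevs_operational ))
  }

In the run above it is called as verify_raid_bdev_state raid_bdev1 configuring raid1 0 2: after pt2 is re-added the array has found its superblock but is still waiting for a second member, which the JSON dump confirms with num_base_bdevs_discovered 1 against num_base_bdevs_operational 2.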
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:23:33.976 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:33.976 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:23:33.976 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:34.234 [2024-07-12 08:50:09.337188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:34.234 [2024-07-12 08:50:09.337319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.235 [2024-07-12 08:50:09.337389] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:34.235 [2024-07-12 08:50:09.337434] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.235 [2024-07-12 08:50:09.338000] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.235 [2024-07-12 08:50:09.338059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:34.235 [2024-07-12 08:50:09.338231] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:34.235 [2024-07-12 08:50:09.338275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:34.235 [2024-07-12 08:50:09.338436] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:23:34.235 [2024-07-12 08:50:09.338457] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:34.235 [2024-07-12 08:50:09.338600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:34.235 [2024-07-12 08:50:09.339085] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:23:34.235 [2024-07-12 08:50:09.339125] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:23:34.235 [2024-07-12 08:50:09.339314] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.235 pt3 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.235 08:50:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.493 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.493 "name": "raid_bdev1", 00:23:34.493 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:34.493 "strip_size_kb": 0, 00:23:34.493 "state": "online", 00:23:34.493 "raid_level": "raid1", 00:23:34.493 "superblock": true, 00:23:34.493 "num_base_bdevs": 3, 00:23:34.493 "num_base_bdevs_discovered": 2, 00:23:34.493 "num_base_bdevs_operational": 2, 00:23:34.493 "base_bdevs_list": [ 00:23:34.493 { 00:23:34.493 "name": null, 00:23:34.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.493 "is_configured": false, 00:23:34.493 "data_offset": 2048, 00:23:34.493 "data_size": 63488 00:23:34.493 }, 00:23:34.493 { 00:23:34.493 "name": "pt2", 00:23:34.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.493 "is_configured": true, 00:23:34.493 "data_offset": 2048, 00:23:34.493 "data_size": 63488 00:23:34.493 }, 00:23:34.493 { 00:23:34.493 "name": "pt3", 00:23:34.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.493 "is_configured": true, 00:23:34.493 "data_offset": 2048, 00:23:34.493 "data_size": 63488 00:23:34.493 } 00:23:34.493 ] 00:23:34.493 }' 00:23:34.493 08:50:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.493 08:50:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.428 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:35.428 [2024-07-12 08:50:10.525491] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.428 [2024-07-12 08:50:10.525527] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.428 [2024-07-12 08:50:10.525634] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.428 [2024-07-12 08:50:10.525695] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.428 [2024-07-12 08:50:10.525706] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:23:35.428 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.428 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:35.700 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:35.700 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:35.700 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:23:35.700 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:23:35.700 08:50:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:35.997 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:36.270 [2024-07-12 08:50:11.224686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:36.270 
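What the log has been exercising here is superblock-driven reassembly: the array is deleted, its passthru members are deleted, and as each member is re-created the examine callback (raid_bdev_examine_cont) spots the on-disk superblock and rebuilds raid_bdev1 without any explicit bdev_raid_create. The RPC sequence, condensed from the trace (socket path as in this run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_delete raid_bdev1        # tear down the assembled array
  $RPC bdev_passthru_delete pt2           # drop the base bdevs one by one
  $RPC bdev_passthru_delete pt3
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # examine finds the raid superblock on pt2 -> raid_bdev1 reappears as "configuring"
  $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # second member found -> raid_bdev1 goes "online" with 2 of 3 members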
[2024-07-12 08:50:11.224803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.270 [2024-07-12 08:50:11.224845] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:36.270 [2024-07-12 08:50:11.224865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.270 [2024-07-12 08:50:11.227146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.270 [2024-07-12 08:50:11.227229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:36.270 [2024-07-12 08:50:11.227351] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:36.270 [2024-07-12 08:50:11.227434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:36.270 [2024-07-12 08:50:11.227657] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:36.270 [2024-07-12 08:50:11.227684] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.270 [2024-07-12 08:50:11.227712] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:23:36.270 [2024-07-12 08:50:11.227781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:36.270 pt1 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.270 "name": "raid_bdev1", 00:23:36.270 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:36.270 "strip_size_kb": 0, 00:23:36.270 "state": "configuring", 00:23:36.270 "raid_level": "raid1", 00:23:36.270 "superblock": true, 00:23:36.270 "num_base_bdevs": 3, 00:23:36.270 "num_base_bdevs_discovered": 1, 00:23:36.270 "num_base_bdevs_operational": 2, 00:23:36.270 "base_bdevs_list": [ 00:23:36.270 { 00:23:36.270 "name": null, 00:23:36.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.270 
"is_configured": false, 00:23:36.270 "data_offset": 2048, 00:23:36.270 "data_size": 63488 00:23:36.270 }, 00:23:36.270 { 00:23:36.270 "name": "pt2", 00:23:36.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.270 "is_configured": true, 00:23:36.270 "data_offset": 2048, 00:23:36.270 "data_size": 63488 00:23:36.270 }, 00:23:36.270 { 00:23:36.270 "name": null, 00:23:36.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.270 "is_configured": false, 00:23:36.270 "data_offset": 2048, 00:23:36.270 "data_size": 63488 00:23:36.270 } 00:23:36.270 ] 00:23:36.270 }' 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.270 08:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.202 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:37.202 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:37.460 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:23:37.460 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:37.460 [2024-07-12 08:50:12.649204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:37.460 [2024-07-12 08:50:12.649338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.460 [2024-07-12 08:50:12.649375] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:37.460 [2024-07-12 08:50:12.649404] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.460 [2024-07-12 08:50:12.650005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.460 [2024-07-12 08:50:12.650076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:37.460 [2024-07-12 08:50:12.650174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:37.460 [2024-07-12 08:50:12.650228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:37.460 [2024-07-12 08:50:12.650418] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:37.460 [2024-07-12 08:50:12.650439] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:37.460 [2024-07-12 08:50:12.650554] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:37.460 [2024-07-12 08:50:12.650934] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:37.460 [2024-07-12 08:50:12.650957] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:37.460 [2024-07-12 08:50:12.651111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.460 pt3 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.718 "name": "raid_bdev1", 00:23:37.718 "uuid": "8af9cd96-9055-4e9a-a66e-bb8e75682e2e", 00:23:37.718 "strip_size_kb": 0, 00:23:37.718 "state": "online", 00:23:37.718 "raid_level": "raid1", 00:23:37.718 "superblock": true, 00:23:37.718 "num_base_bdevs": 3, 00:23:37.718 "num_base_bdevs_discovered": 2, 00:23:37.718 "num_base_bdevs_operational": 2, 00:23:37.718 "base_bdevs_list": [ 00:23:37.718 { 00:23:37.718 "name": null, 00:23:37.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.718 "is_configured": false, 00:23:37.718 "data_offset": 2048, 00:23:37.718 "data_size": 63488 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "name": "pt2", 00:23:37.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.718 "is_configured": true, 00:23:37.718 "data_offset": 2048, 00:23:37.718 "data_size": 63488 00:23:37.718 }, 00:23:37.718 { 00:23:37.718 "name": "pt3", 00:23:37.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:37.718 "is_configured": true, 00:23:37.718 "data_offset": 2048, 00:23:37.718 "data_size": 63488 00:23:37.718 } 00:23:37.718 ] 00:23:37.718 }' 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.718 08:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.663 08:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:38.664 08:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:38.921 08:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:38.921 08:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:38.921 08:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:39.179 [2024-07-12 08:50:14.128797] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 8af9cd96-9055-4e9a-a66e-bb8e75682e2e '!=' 8af9cd96-9055-4e9a-a66e-bb8e75682e2e ']' 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 134406 00:23:39.179 08:50:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 134406 ']' 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 134406 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134406 00:23:39.179 killing process with pid 134406 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134406' 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 134406 00:23:39.179 08:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 134406 00:23:39.179 [2024-07-12 08:50:14.165620] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:39.179 [2024-07-12 08:50:14.165722] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.179 [2024-07-12 08:50:14.165789] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.179 [2024-07-12 08:50:14.165807] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:23:39.436 [2024-07-12 08:50:14.392725] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.372 ************************************ 00:23:40.372 END TEST raid_superblock_test 00:23:40.372 ************************************ 00:23:40.372 08:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:40.372 00:23:40.372 real 0m24.038s 00:23:40.372 user 0m45.055s 00:23:40.372 sys 0m2.594s 00:23:40.372 08:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.372 08:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.372 08:50:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:40.372 08:50:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:40.372 08:50:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:40.372 08:50:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.372 08:50:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:40.372 ************************************ 00:23:40.372 START TEST raid_read_error_test 00:23:40.372 ************************************ 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:40.372 08:50:15 
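raid_read_error_test switches from RPC-level checks to I/O: bdevperf is run against raid_bdev1 (randrw, 50% reads, 128k I/O, queue depth 1, 60 s budget, per the @807 command line below), and each base device is built as a three-layer stack so that failures can be injected at will. Condensed from the @812-@819 trace that follows (the loop index is illustrative; the real script iterates over the base_bdevs array, and EE_ is the prefix bdev_error_create exposes):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # 32 MiB, 512-byte blocks
      $RPC bdev_error_create BaseBdev${i}_malloc                 # wraps it as EE_BaseBdev${i}_malloc
      $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s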
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.DJPWAzLuZ9 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135184 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135184 /var/tmp/spdk-raid.sock 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 135184 ']' 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:40.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.372 08:50:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.630 [2024-07-12 08:50:15.601639] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:23:40.630 [2024-07-12 08:50:15.601841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135184 ] 00:23:40.630 [2024-07-12 08:50:15.766900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.889 [2024-07-12 08:50:16.008208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.147 [2024-07-12 08:50:16.214106] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.405 08:50:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.405 08:50:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:41.405 08:50:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:41.405 08:50:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:41.663 BaseBdev1_malloc 00:23:41.664 08:50:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:41.922 true 00:23:41.922 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:42.180 [2024-07-12 08:50:17.242545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:42.180 [2024-07-12 08:50:17.242675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.180 [2024-07-12 08:50:17.242711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:42.180 [2024-07-12 08:50:17.242732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.180 [2024-07-12 08:50:17.244813] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.180 [2024-07-12 08:50:17.244858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:42.180 BaseBdev1 00:23:42.180 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:42.180 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:42.437 BaseBdev2_malloc 00:23:42.437 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:42.695 true 00:23:42.695 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:42.953 [2024-07-12 08:50:17.977845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:42.953 [2024-07-12 08:50:17.978005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.953 [2024-07-12 08:50:17.978086] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:42.953 [2024-07-12 08:50:17.978147] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.953 [2024-07-12 08:50:17.981062] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.953 [2024-07-12 08:50:17.981126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:42.953 BaseBdev2 00:23:42.953 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:42.953 08:50:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:43.211 BaseBdev3_malloc 00:23:43.211 08:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:43.469 true 00:23:43.469 08:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:43.727 [2024-07-12 08:50:18.731869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:43.727 [2024-07-12 08:50:18.731976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.727 [2024-07-12 08:50:18.732013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:43.727 [2024-07-12 08:50:18.732039] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.727 [2024-07-12 08:50:18.734403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.727 [2024-07-12 08:50:18.734473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:43.727 BaseBdev3 00:23:43.727 08:50:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:43.985 [2024-07-12 08:50:19.004085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.985 [2024-07-12 08:50:19.006056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.985 [2024-07-12 08:50:19.006160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:43.985 [2024-07-12 08:50:19.006517] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:43.985 [2024-07-12 08:50:19.006543] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:43.985 [2024-07-12 08:50:19.006710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:43.985 [2024-07-12 08:50:19.007149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:43.985 [2024-07-12 08:50:19.007171] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:23:43.985 [2024-07-12 08:50:19.007335] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.985 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.986 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.986 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.986 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.243 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.243 "name": "raid_bdev1", 00:23:44.243 "uuid": "7cebe2a1-6907-4104-8643-517e9faa8816", 00:23:44.243 "strip_size_kb": 0, 00:23:44.243 "state": "online", 00:23:44.243 "raid_level": "raid1", 00:23:44.243 "superblock": true, 00:23:44.243 "num_base_bdevs": 3, 00:23:44.243 "num_base_bdevs_discovered": 3, 00:23:44.243 "num_base_bdevs_operational": 3, 00:23:44.243 "base_bdevs_list": [ 00:23:44.243 { 00:23:44.243 "name": "BaseBdev1", 00:23:44.243 "uuid": "a6d8b458-316d-51a6-acd5-227a05d0cdd8", 00:23:44.243 "is_configured": true, 00:23:44.243 "data_offset": 2048, 00:23:44.243 "data_size": 63488 00:23:44.243 }, 00:23:44.243 { 00:23:44.243 "name": "BaseBdev2", 00:23:44.243 "uuid": "bf58b990-775b-5a33-b2c5-34447cd32f7f", 00:23:44.243 "is_configured": true, 00:23:44.243 "data_offset": 2048, 00:23:44.243 "data_size": 63488 00:23:44.243 }, 00:23:44.243 { 00:23:44.243 "name": "BaseBdev3", 00:23:44.243 "uuid": "ec57cbf3-c8a6-52f2-92ca-a25d4af44cff", 00:23:44.243 "is_configured": true, 00:23:44.243 "data_offset": 2048, 00:23:44.243 "data_size": 63488 00:23:44.243 } 00:23:44.243 ] 00:23:44.243 }' 00:23:44.243 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.243 08:50:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:44.809 08:50:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:45.067 [2024-07-12 08:50:20.069685] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:45.998 08:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=3 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.325 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.600 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.600 "name": "raid_bdev1", 00:23:46.600 "uuid": "7cebe2a1-6907-4104-8643-517e9faa8816", 00:23:46.600 "strip_size_kb": 0, 00:23:46.600 "state": "online", 00:23:46.600 "raid_level": "raid1", 00:23:46.600 "superblock": true, 00:23:46.600 "num_base_bdevs": 3, 00:23:46.600 "num_base_bdevs_discovered": 3, 00:23:46.600 "num_base_bdevs_operational": 3, 00:23:46.601 "base_bdevs_list": [ 00:23:46.601 { 00:23:46.601 "name": "BaseBdev1", 00:23:46.601 "uuid": "a6d8b458-316d-51a6-acd5-227a05d0cdd8", 00:23:46.601 "is_configured": true, 00:23:46.601 "data_offset": 2048, 00:23:46.601 "data_size": 63488 00:23:46.601 }, 00:23:46.601 { 00:23:46.601 "name": "BaseBdev2", 00:23:46.601 "uuid": "bf58b990-775b-5a33-b2c5-34447cd32f7f", 00:23:46.601 "is_configured": true, 00:23:46.601 "data_offset": 2048, 00:23:46.601 "data_size": 63488 00:23:46.601 }, 00:23:46.601 { 00:23:46.601 "name": "BaseBdev3", 00:23:46.601 "uuid": "ec57cbf3-c8a6-52f2-92ca-a25d4af44cff", 00:23:46.601 "is_configured": true, 00:23:46.601 "data_offset": 2048, 00:23:46.601 "data_size": 63488 00:23:46.601 } 00:23:46.601 ] 00:23:46.601 }' 00:23:46.601 08:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.601 08:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.165 08:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:47.423 [2024-07-12 08:50:22.569389] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:47.423 [2024-07-12 08:50:22.569448] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.423 [2024-07-12 08:50:22.572253] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.423 [2024-07-12 08:50:22.572333] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.423 [2024-07-12 08:50:22.572433] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:47.423 [2024-07-12 08:50:22.572446] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:23:47.423 0 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135184 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 135184 ']' 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 135184 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135184 00:23:47.423 killing process with pid 135184 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135184' 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 135184 00:23:47.423 08:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 135184 00:23:47.680 [2024-07-12 08:50:22.616969] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:47.680 [2024-07-12 08:50:22.821753] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.DJPWAzLuZ9 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:49.052 ************************************ 00:23:49.052 END TEST raid_read_error_test 00:23:49.052 ************************************ 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:49.052 00:23:49.052 real 0m8.469s 00:23:49.052 user 0m13.080s 00:23:49.052 sys 0m0.996s 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.052 08:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.052 08:50:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:49.052 08:50:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:23:49.052 08:50:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:49.052 08:50:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.052 08:50:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.052 ************************************ 00:23:49.052 START TEST raid_write_error_test 
00:23:49.052 ************************************ 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:49.052 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vifQ3s6zkY 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135410 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135410 /var/tmp/spdk-raid.sock 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 135410 ']' 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:49.053 
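The read pass above ended cleanly: with raid1, a read failure injected into one member (@827) is recoverable from the mirror, so the array stays online with all 3 base bdevs discovered and bdevperf logs a 0.00 fail rate, which the grep over tmp.DJPWAzLuZ9 confirmed. The write pass starting here reuses the same harness with error_io_type=write; judging by the read/write branch at @830, the injection presumably becomes (with $RPC as in the earlier sketch):

  # Read case, verbatim from the trace above:
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
  # Write case (an assumption from error_io_type=write; not yet shown in this excerpt):
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure

For raid1 a failed write cannot be papered over by the mirror, so the expectation flips: the failing member should be ejected and the array should carry on degraded. That is an inference from the @830 branch on error_io_type, not something this excerpt shows.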
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.053 08:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.053 [2024-07-12 08:50:24.120154] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:23:49.053 [2024-07-12 08:50:24.120443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135410 ] 00:23:49.310 [2024-07-12 08:50:24.279536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.567 [2024-07-12 08:50:24.526221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.567 [2024-07-12 08:50:24.714226] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.133 08:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.133 08:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:50.133 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:50.133 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:50.390 BaseBdev1_malloc 00:23:50.390 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:50.648 true 00:23:50.648 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:50.905 [2024-07-12 08:50:25.896076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:50.905 [2024-07-12 08:50:25.896244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.905 [2024-07-12 08:50:25.896305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:50.905 [2024-07-12 08:50:25.896329] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.905 [2024-07-12 08:50:25.898948] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.905 [2024-07-12 08:50:25.899012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:50.905 BaseBdev1 00:23:50.905 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:50.905 08:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:51.163 BaseBdev2_malloc 00:23:51.163 08:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:51.420 true 00:23:51.420 08:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:51.420 [2024-07-12 08:50:26.585010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:51.420 [2024-07-12 08:50:26.585122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.420 [2024-07-12 08:50:26.585161] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:51.420 [2024-07-12 08:50:26.585181] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.420 [2024-07-12 08:50:26.587399] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.421 [2024-07-12 08:50:26.587463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:51.421 BaseBdev2 00:23:51.421 08:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:51.421 08:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:51.678 BaseBdev3_malloc 00:23:51.678 08:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:51.935 true 00:23:51.935 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:52.193 [2024-07-12 08:50:27.314104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:52.193 [2024-07-12 08:50:27.314220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.193 [2024-07-12 08:50:27.314261] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:52.193 [2024-07-12 08:50:27.314286] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.193 [2024-07-12 08:50:27.316662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.193 [2024-07-12 08:50:27.316733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:52.193 BaseBdev3 00:23:52.193 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:52.455 [2024-07-12 08:50:27.530174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.455 [2024-07-12 08:50:27.532217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.455 [2024-07-12 08:50:27.532323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:52.455 [2024-07-12 08:50:27.532582] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:52.455 [2024-07-12 08:50:27.532608] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:52.455 [2024-07-12 08:50:27.532729] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:52.455 [2024-07-12 08:50:27.533107] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:52.455 [2024-07-12 08:50:27.533132] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:23:52.455 [2024-07-12 08:50:27.533275] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.455 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.713 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.713 "name": "raid_bdev1", 00:23:52.713 "uuid": "afe90f24-ebbc-4ed3-8be1-14186e686484", 00:23:52.713 "strip_size_kb": 0, 00:23:52.713 "state": "online", 00:23:52.713 "raid_level": "raid1", 00:23:52.713 "superblock": true, 00:23:52.713 "num_base_bdevs": 3, 00:23:52.713 "num_base_bdevs_discovered": 3, 00:23:52.713 "num_base_bdevs_operational": 3, 00:23:52.713 "base_bdevs_list": [ 00:23:52.713 { 00:23:52.713 "name": "BaseBdev1", 00:23:52.713 "uuid": "15fcce35-5255-53e5-a3fc-82489d82f362", 00:23:52.713 "is_configured": true, 00:23:52.713 "data_offset": 2048, 00:23:52.713 "data_size": 63488 00:23:52.713 }, 00:23:52.713 { 00:23:52.713 "name": "BaseBdev2", 00:23:52.713 "uuid": "2d49450c-6274-5a5c-9d1a-9759ea95b8f6", 00:23:52.713 "is_configured": true, 00:23:52.713 "data_offset": 2048, 00:23:52.713 "data_size": 63488 00:23:52.713 }, 00:23:52.713 { 00:23:52.713 "name": "BaseBdev3", 00:23:52.713 "uuid": "7ddfcad5-f37c-5179-94bb-9ca7a8e9026d", 00:23:52.713 "is_configured": true, 00:23:52.713 "data_offset": 2048, 00:23:52.713 "data_size": 63488 00:23:52.713 } 00:23:52.713 ] 00:23:52.713 }' 00:23:52.713 08:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.713 08:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.278 08:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:53.278 08:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:23:53.535 [2024-07-12 08:50:28.531497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:54.468 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:54.726 [2024-07-12 08:50:29.705040] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:54.726 [2024-07-12 08:50:29.705184] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:54.726 [2024-07-12 08:50:29.705478] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:23:54.726 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.727 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.984 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.984 "name": "raid_bdev1", 00:23:54.984 "uuid": "afe90f24-ebbc-4ed3-8be1-14186e686484", 00:23:54.984 "strip_size_kb": 0, 00:23:54.984 "state": "online", 00:23:54.984 "raid_level": "raid1", 00:23:54.984 "superblock": true, 00:23:54.984 "num_base_bdevs": 3, 00:23:54.984 "num_base_bdevs_discovered": 2, 00:23:54.984 "num_base_bdevs_operational": 2, 00:23:54.984 "base_bdevs_list": [ 00:23:54.984 { 00:23:54.984 "name": null, 00:23:54.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.984 "is_configured": false, 00:23:54.984 "data_offset": 2048, 00:23:54.984 "data_size": 63488 00:23:54.984 }, 00:23:54.984 { 00:23:54.984 "name": "BaseBdev2", 00:23:54.984 "uuid": "2d49450c-6274-5a5c-9d1a-9759ea95b8f6", 00:23:54.984 "is_configured": true, 00:23:54.984 "data_offset": 2048, 00:23:54.984 
"data_size": 63488 00:23:54.984 }, 00:23:54.984 { 00:23:54.984 "name": "BaseBdev3", 00:23:54.984 "uuid": "7ddfcad5-f37c-5179-94bb-9ca7a8e9026d", 00:23:54.984 "is_configured": true, 00:23:54.984 "data_offset": 2048, 00:23:54.984 "data_size": 63488 00:23:54.984 } 00:23:54.984 ] 00:23:54.984 }' 00:23:54.984 08:50:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.984 08:50:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.549 08:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:55.808 [2024-07-12 08:50:30.911363] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.808 [2024-07-12 08:50:30.911437] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.808 [2024-07-12 08:50:30.914701] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.808 [2024-07-12 08:50:30.914780] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.808 [2024-07-12 08:50:30.914876] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.808 [2024-07-12 08:50:30.914888] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:23:55.808 0 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135410 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 135410 ']' 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 135410 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135410 00:23:55.808 killing process with pid 135410 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135410' 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 135410 00:23:55.808 08:50:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 135410 00:23:55.808 [2024-07-12 08:50:30.947667] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.066 [2024-07-12 08:50:31.136343] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.vifQ3s6zkY 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:57.477 ************************************ 00:23:57.477 END TEST raid_write_error_test 00:23:57.477 ************************************ 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:23:57.477 08:50:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:57.477 00:23:57.477 real 0m8.249s 00:23:57.477 user 0m12.717s 00:23:57.477 sys 0m0.942s 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.477 08:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.477 08:50:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:57.477 08:50:32 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:23:57.477 08:50:32 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:57.477 08:50:32 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:23:57.477 08:50:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:57.477 08:50:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.477 08:50:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:57.477 ************************************ 00:23:57.477 START TEST raid_state_function_test 00:23:57.477 ************************************ 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:57.477 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135629 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135629' 00:23:57.478 Process raid pid: 135629 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135629 /var/tmp/spdk-raid.sock 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 135629 ']' 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.478 08:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.478 [2024-07-12 08:50:32.436161] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
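
The state-function test starting here follows one loop, reconstructed below from the RPC calls visible in the trace: create a raid0 array over four base bdevs that do not exist yet (the array sits in the "configuring" state), then register the malloc bdevs one at a time and re-read the state until it flips to "online" once all four are claimed. The RPC names, arguments, and jq filter match the trace; the loop and the .state extraction are a condensed sketch, not the verbatim bdev_raid.sh code (the real test also exercises delete/re-create along the way).

    # Sketch: drive bdev_svc from "configuring" to "online".
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Creating the array before its base bdevs exist is allowed;
    # it stays in the "configuring" state until all four are claimed.
    rpc bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    for n in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev$n"
        state=$(rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
        echo "after BaseBdev$n: state=$state"   # "configuring" until the 4th bdev
    done
    # Expected after the loop: state "online", num_base_bdevs_discovered 4.
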
00:23:57.478 [2024-07-12 08:50:32.437023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.478 [2024-07-12 08:50:32.607468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.736 [2024-07-12 08:50:32.821634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.997 [2024-07-12 08:50:33.025254] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.254 08:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.254 08:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:58.254 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:58.511 [2024-07-12 08:50:33.682287] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:58.511 [2024-07-12 08:50:33.682581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:58.511 [2024-07-12 08:50:33.682724] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:58.511 [2024-07-12 08:50:33.682785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:58.511 [2024-07-12 08:50:33.682911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:58.511 [2024-07-12 08:50:33.682979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:58.511 [2024-07-12 08:50:33.683127] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:58.511 [2024-07-12 08:50:33.683199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.511 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.768 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.768 "name": "Existed_Raid", 00:23:58.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.768 "strip_size_kb": 64, 00:23:58.768 "state": "configuring", 00:23:58.768 "raid_level": "raid0", 00:23:58.768 "superblock": false, 00:23:58.768 "num_base_bdevs": 4, 00:23:58.768 "num_base_bdevs_discovered": 0, 00:23:58.768 "num_base_bdevs_operational": 4, 00:23:58.768 "base_bdevs_list": [ 00:23:58.768 { 00:23:58.768 "name": "BaseBdev1", 00:23:58.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.769 "is_configured": false, 00:23:58.769 "data_offset": 0, 00:23:58.769 "data_size": 0 00:23:58.769 }, 00:23:58.769 { 00:23:58.769 "name": "BaseBdev2", 00:23:58.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.769 "is_configured": false, 00:23:58.769 "data_offset": 0, 00:23:58.769 "data_size": 0 00:23:58.769 }, 00:23:58.769 { 00:23:58.769 "name": "BaseBdev3", 00:23:58.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.769 "is_configured": false, 00:23:58.769 "data_offset": 0, 00:23:58.769 "data_size": 0 00:23:58.769 }, 00:23:58.769 { 00:23:58.769 "name": "BaseBdev4", 00:23:58.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.769 "is_configured": false, 00:23:58.769 "data_offset": 0, 00:23:58.769 "data_size": 0 00:23:58.769 } 00:23:58.769 ] 00:23:58.769 }' 00:23:58.769 08:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.769 08:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.703 08:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:59.703 [2024-07-12 08:50:34.850455] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:59.703 [2024-07-12 08:50:34.850706] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:59.703 08:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:59.961 [2024-07-12 08:50:35.126572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:59.961 [2024-07-12 08:50:35.126875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:59.961 [2024-07-12 08:50:35.126977] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:59.961 [2024-07-12 08:50:35.127130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:59.961 [2024-07-12 08:50:35.127242] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:59.961 [2024-07-12 08:50:35.127322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:59.961 [2024-07-12 08:50:35.127418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:59.961 [2024-07-12 08:50:35.127477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:59.961 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:00.220 [2024-07-12 08:50:35.357341] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.220 BaseBdev1 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:00.220 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.479 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:00.737 [ 00:24:00.737 { 00:24:00.737 "name": "BaseBdev1", 00:24:00.737 "aliases": [ 00:24:00.737 "1441f3d0-493c-43a8-a593-0e1c756407e6" 00:24:00.737 ], 00:24:00.737 "product_name": "Malloc disk", 00:24:00.737 "block_size": 512, 00:24:00.737 "num_blocks": 65536, 00:24:00.737 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:00.737 "assigned_rate_limits": { 00:24:00.737 "rw_ios_per_sec": 0, 00:24:00.737 "rw_mbytes_per_sec": 0, 00:24:00.737 "r_mbytes_per_sec": 0, 00:24:00.737 "w_mbytes_per_sec": 0 00:24:00.737 }, 00:24:00.737 "claimed": true, 00:24:00.737 "claim_type": "exclusive_write", 00:24:00.737 "zoned": false, 00:24:00.737 "supported_io_types": { 00:24:00.737 "read": true, 00:24:00.737 "write": true, 00:24:00.737 "unmap": true, 00:24:00.737 "flush": true, 00:24:00.737 "reset": true, 00:24:00.737 "nvme_admin": false, 00:24:00.737 "nvme_io": false, 00:24:00.737 "nvme_io_md": false, 00:24:00.737 "write_zeroes": true, 00:24:00.737 "zcopy": true, 00:24:00.737 "get_zone_info": false, 00:24:00.737 "zone_management": false, 00:24:00.737 "zone_append": false, 00:24:00.737 "compare": false, 00:24:00.737 "compare_and_write": false, 00:24:00.737 "abort": true, 00:24:00.737 "seek_hole": false, 00:24:00.737 "seek_data": false, 00:24:00.737 "copy": true, 00:24:00.737 "nvme_iov_md": false 00:24:00.737 }, 00:24:00.738 "memory_domains": [ 00:24:00.738 { 00:24:00.738 "dma_device_id": "system", 00:24:00.738 "dma_device_type": 1 00:24:00.738 }, 00:24:00.738 { 00:24:00.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.738 "dma_device_type": 2 00:24:00.738 } 00:24:00.738 ], 00:24:00.738 "driver_specific": {} 00:24:00.738 } 00:24:00.738 ] 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.738 08:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.996 08:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.996 "name": "Existed_Raid", 00:24:00.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.996 "strip_size_kb": 64, 00:24:00.996 "state": "configuring", 00:24:00.996 "raid_level": "raid0", 00:24:00.996 "superblock": false, 00:24:00.996 "num_base_bdevs": 4, 00:24:00.996 "num_base_bdevs_discovered": 1, 00:24:00.996 "num_base_bdevs_operational": 4, 00:24:00.996 "base_bdevs_list": [ 00:24:00.996 { 00:24:00.996 "name": "BaseBdev1", 00:24:00.996 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:00.996 "is_configured": true, 00:24:00.996 "data_offset": 0, 00:24:00.996 "data_size": 65536 00:24:00.996 }, 00:24:00.996 { 00:24:00.996 "name": "BaseBdev2", 00:24:00.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.996 "is_configured": false, 00:24:00.996 "data_offset": 0, 00:24:00.996 "data_size": 0 00:24:00.996 }, 00:24:00.996 { 00:24:00.996 "name": "BaseBdev3", 00:24:00.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.996 "is_configured": false, 00:24:00.996 "data_offset": 0, 00:24:00.996 "data_size": 0 00:24:00.996 }, 00:24:00.996 { 00:24:00.996 "name": "BaseBdev4", 00:24:00.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.996 "is_configured": false, 00:24:00.996 "data_offset": 0, 00:24:00.996 "data_size": 0 00:24:00.996 } 00:24:00.996 ] 00:24:00.996 }' 00:24:00.996 08:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.996 08:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.562 08:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:01.820 [2024-07-12 08:50:36.945775] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:01.820 [2024-07-12 08:50:36.945977] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:24:01.820 08:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:02.077 [2024-07-12 08:50:37.185865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:02.077 [2024-07-12 08:50:37.187979] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:24:02.077 [2024-07-12 08:50:37.188180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:02.077 [2024-07-12 08:50:37.188352] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:02.077 [2024-07-12 08:50:37.188418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:02.077 [2024-07-12 08:50:37.188676] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:02.077 [2024-07-12 08:50:37.188749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:02.077 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:02.077 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.078 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.335 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.335 "name": "Existed_Raid", 00:24:02.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.335 "strip_size_kb": 64, 00:24:02.335 "state": "configuring", 00:24:02.335 "raid_level": "raid0", 00:24:02.335 "superblock": false, 00:24:02.335 "num_base_bdevs": 4, 00:24:02.335 "num_base_bdevs_discovered": 1, 00:24:02.335 "num_base_bdevs_operational": 4, 00:24:02.335 "base_bdevs_list": [ 00:24:02.335 { 00:24:02.335 "name": "BaseBdev1", 00:24:02.335 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:02.335 "is_configured": true, 00:24:02.335 "data_offset": 0, 00:24:02.335 "data_size": 65536 00:24:02.335 }, 00:24:02.335 { 00:24:02.335 "name": "BaseBdev2", 00:24:02.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.335 "is_configured": false, 00:24:02.335 "data_offset": 0, 00:24:02.335 "data_size": 0 00:24:02.335 }, 00:24:02.335 { 00:24:02.335 "name": "BaseBdev3", 00:24:02.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.335 "is_configured": false, 00:24:02.335 "data_offset": 0, 00:24:02.335 "data_size": 0 00:24:02.335 }, 
00:24:02.335 { 00:24:02.335 "name": "BaseBdev4", 00:24:02.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.335 "is_configured": false, 00:24:02.335 "data_offset": 0, 00:24:02.335 "data_size": 0 00:24:02.335 } 00:24:02.335 ] 00:24:02.335 }' 00:24:02.335 08:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.335 08:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.901 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:03.158 [2024-07-12 08:50:38.330126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.158 BaseBdev2 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:03.158 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:03.415 08:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:03.673 [ 00:24:03.673 { 00:24:03.673 "name": "BaseBdev2", 00:24:03.673 "aliases": [ 00:24:03.673 "6be8da79-cb84-407b-b741-290cc2a907d9" 00:24:03.673 ], 00:24:03.673 "product_name": "Malloc disk", 00:24:03.673 "block_size": 512, 00:24:03.673 "num_blocks": 65536, 00:24:03.673 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:03.673 "assigned_rate_limits": { 00:24:03.673 "rw_ios_per_sec": 0, 00:24:03.673 "rw_mbytes_per_sec": 0, 00:24:03.673 "r_mbytes_per_sec": 0, 00:24:03.673 "w_mbytes_per_sec": 0 00:24:03.673 }, 00:24:03.673 "claimed": true, 00:24:03.673 "claim_type": "exclusive_write", 00:24:03.673 "zoned": false, 00:24:03.673 "supported_io_types": { 00:24:03.673 "read": true, 00:24:03.673 "write": true, 00:24:03.673 "unmap": true, 00:24:03.673 "flush": true, 00:24:03.673 "reset": true, 00:24:03.673 "nvme_admin": false, 00:24:03.673 "nvme_io": false, 00:24:03.673 "nvme_io_md": false, 00:24:03.673 "write_zeroes": true, 00:24:03.673 "zcopy": true, 00:24:03.673 "get_zone_info": false, 00:24:03.673 "zone_management": false, 00:24:03.673 "zone_append": false, 00:24:03.673 "compare": false, 00:24:03.673 "compare_and_write": false, 00:24:03.673 "abort": true, 00:24:03.673 "seek_hole": false, 00:24:03.673 "seek_data": false, 00:24:03.673 "copy": true, 00:24:03.673 "nvme_iov_md": false 00:24:03.673 }, 00:24:03.673 "memory_domains": [ 00:24:03.673 { 00:24:03.673 "dma_device_id": "system", 00:24:03.673 "dma_device_type": 1 00:24:03.673 }, 00:24:03.673 { 00:24:03.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.673 "dma_device_type": 2 00:24:03.673 } 00:24:03.673 ], 00:24:03.673 "driver_specific": {} 00:24:03.673 } 00:24:03.673 ] 00:24:03.673 08:50:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:03.673 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.674 08:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.932 08:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.932 "name": "Existed_Raid", 00:24:03.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.932 "strip_size_kb": 64, 00:24:03.932 "state": "configuring", 00:24:03.932 "raid_level": "raid0", 00:24:03.932 "superblock": false, 00:24:03.932 "num_base_bdevs": 4, 00:24:03.932 "num_base_bdevs_discovered": 2, 00:24:03.932 "num_base_bdevs_operational": 4, 00:24:03.932 "base_bdevs_list": [ 00:24:03.932 { 00:24:03.932 "name": "BaseBdev1", 00:24:03.932 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:03.932 "is_configured": true, 00:24:03.932 "data_offset": 0, 00:24:03.932 "data_size": 65536 00:24:03.932 }, 00:24:03.932 { 00:24:03.932 "name": "BaseBdev2", 00:24:03.932 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:03.932 "is_configured": true, 00:24:03.932 "data_offset": 0, 00:24:03.932 "data_size": 65536 00:24:03.932 }, 00:24:03.932 { 00:24:03.932 "name": "BaseBdev3", 00:24:03.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.932 "is_configured": false, 00:24:03.932 "data_offset": 0, 00:24:03.932 "data_size": 0 00:24:03.932 }, 00:24:03.932 { 00:24:03.932 "name": "BaseBdev4", 00:24:03.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.932 "is_configured": false, 00:24:03.932 "data_offset": 0, 00:24:03.932 "data_size": 0 00:24:03.932 } 00:24:03.932 ] 00:24:03.932 }' 00:24:03.932 08:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.932 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.496 08:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:24:04.755 [2024-07-12 08:50:39.875751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:04.755 BaseBdev3 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:04.755 08:50:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:05.012 08:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:05.272 [ 00:24:05.272 { 00:24:05.272 "name": "BaseBdev3", 00:24:05.272 "aliases": [ 00:24:05.272 "a29a7ef5-373a-41b6-956a-89b4382c79a7" 00:24:05.272 ], 00:24:05.272 "product_name": "Malloc disk", 00:24:05.272 "block_size": 512, 00:24:05.272 "num_blocks": 65536, 00:24:05.272 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:05.272 "assigned_rate_limits": { 00:24:05.272 "rw_ios_per_sec": 0, 00:24:05.272 "rw_mbytes_per_sec": 0, 00:24:05.272 "r_mbytes_per_sec": 0, 00:24:05.272 "w_mbytes_per_sec": 0 00:24:05.272 }, 00:24:05.272 "claimed": true, 00:24:05.272 "claim_type": "exclusive_write", 00:24:05.272 "zoned": false, 00:24:05.272 "supported_io_types": { 00:24:05.272 "read": true, 00:24:05.272 "write": true, 00:24:05.272 "unmap": true, 00:24:05.272 "flush": true, 00:24:05.272 "reset": true, 00:24:05.272 "nvme_admin": false, 00:24:05.272 "nvme_io": false, 00:24:05.272 "nvme_io_md": false, 00:24:05.272 "write_zeroes": true, 00:24:05.272 "zcopy": true, 00:24:05.272 "get_zone_info": false, 00:24:05.272 "zone_management": false, 00:24:05.272 "zone_append": false, 00:24:05.272 "compare": false, 00:24:05.272 "compare_and_write": false, 00:24:05.272 "abort": true, 00:24:05.272 "seek_hole": false, 00:24:05.272 "seek_data": false, 00:24:05.272 "copy": true, 00:24:05.272 "nvme_iov_md": false 00:24:05.272 }, 00:24:05.272 "memory_domains": [ 00:24:05.272 { 00:24:05.272 "dma_device_id": "system", 00:24:05.272 "dma_device_type": 1 00:24:05.272 }, 00:24:05.272 { 00:24:05.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.272 "dma_device_type": 2 00:24:05.272 } 00:24:05.272 ], 00:24:05.272 "driver_specific": {} 00:24:05.272 } 00:24:05.272 ] 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.272 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.577 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.577 "name": "Existed_Raid", 00:24:05.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.577 "strip_size_kb": 64, 00:24:05.577 "state": "configuring", 00:24:05.577 "raid_level": "raid0", 00:24:05.577 "superblock": false, 00:24:05.577 "num_base_bdevs": 4, 00:24:05.577 "num_base_bdevs_discovered": 3, 00:24:05.577 "num_base_bdevs_operational": 4, 00:24:05.577 "base_bdevs_list": [ 00:24:05.577 { 00:24:05.577 "name": "BaseBdev1", 00:24:05.577 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:05.577 "is_configured": true, 00:24:05.577 "data_offset": 0, 00:24:05.577 "data_size": 65536 00:24:05.577 }, 00:24:05.577 { 00:24:05.577 "name": "BaseBdev2", 00:24:05.577 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:05.577 "is_configured": true, 00:24:05.577 "data_offset": 0, 00:24:05.577 "data_size": 65536 00:24:05.577 }, 00:24:05.577 { 00:24:05.577 "name": "BaseBdev3", 00:24:05.577 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:05.577 "is_configured": true, 00:24:05.577 "data_offset": 0, 00:24:05.577 "data_size": 65536 00:24:05.577 }, 00:24:05.577 { 00:24:05.577 "name": "BaseBdev4", 00:24:05.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.577 "is_configured": false, 00:24:05.577 "data_offset": 0, 00:24:05.577 "data_size": 0 00:24:05.577 } 00:24:05.577 ] 00:24:05.577 }' 00:24:05.577 08:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.577 08:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.142 08:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:06.400 [2024-07-12 08:50:41.550174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:06.400 [2024-07-12 08:50:41.550256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:24:06.400 [2024-07-12 08:50:41.550271] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:06.400 [2024-07-12 08:50:41.550412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:06.400 [2024-07-12 08:50:41.550818] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:06.400 [2024-07-12 08:50:41.550843] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:06.400 [2024-07-12 08:50:41.551111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.400 BaseBdev4 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:06.400 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.659 08:50:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:06.918 [ 00:24:06.918 { 00:24:06.918 "name": "BaseBdev4", 00:24:06.918 "aliases": [ 00:24:06.918 "9ebd713a-b22c-4b03-9046-3677bc07b7ec" 00:24:06.918 ], 00:24:06.918 "product_name": "Malloc disk", 00:24:06.918 "block_size": 512, 00:24:06.918 "num_blocks": 65536, 00:24:06.918 "uuid": "9ebd713a-b22c-4b03-9046-3677bc07b7ec", 00:24:06.918 "assigned_rate_limits": { 00:24:06.918 "rw_ios_per_sec": 0, 00:24:06.918 "rw_mbytes_per_sec": 0, 00:24:06.918 "r_mbytes_per_sec": 0, 00:24:06.918 "w_mbytes_per_sec": 0 00:24:06.918 }, 00:24:06.918 "claimed": true, 00:24:06.918 "claim_type": "exclusive_write", 00:24:06.918 "zoned": false, 00:24:06.918 "supported_io_types": { 00:24:06.918 "read": true, 00:24:06.918 "write": true, 00:24:06.918 "unmap": true, 00:24:06.918 "flush": true, 00:24:06.918 "reset": true, 00:24:06.918 "nvme_admin": false, 00:24:06.918 "nvme_io": false, 00:24:06.918 "nvme_io_md": false, 00:24:06.918 "write_zeroes": true, 00:24:06.918 "zcopy": true, 00:24:06.918 "get_zone_info": false, 00:24:06.918 "zone_management": false, 00:24:06.918 "zone_append": false, 00:24:06.918 "compare": false, 00:24:06.918 "compare_and_write": false, 00:24:06.918 "abort": true, 00:24:06.918 "seek_hole": false, 00:24:06.918 "seek_data": false, 00:24:06.918 "copy": true, 00:24:06.918 "nvme_iov_md": false 00:24:06.918 }, 00:24:06.918 "memory_domains": [ 00:24:06.918 { 00:24:06.918 "dma_device_id": "system", 00:24:06.918 "dma_device_type": 1 00:24:06.918 }, 00:24:06.918 { 00:24:06.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.918 "dma_device_type": 2 00:24:06.918 } 00:24:06.918 ], 00:24:06.918 "driver_specific": {} 00:24:06.918 } 00:24:06.918 ] 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.918 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.177 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.177 "name": "Existed_Raid", 00:24:07.177 "uuid": "5b18d2b4-370d-4327-8cf6-a45fd500678c", 00:24:07.177 "strip_size_kb": 64, 00:24:07.177 "state": "online", 00:24:07.177 "raid_level": "raid0", 00:24:07.177 "superblock": false, 00:24:07.177 "num_base_bdevs": 4, 00:24:07.177 "num_base_bdevs_discovered": 4, 00:24:07.177 "num_base_bdevs_operational": 4, 00:24:07.177 "base_bdevs_list": [ 00:24:07.177 { 00:24:07.177 "name": "BaseBdev1", 00:24:07.177 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:07.177 "is_configured": true, 00:24:07.177 "data_offset": 0, 00:24:07.177 "data_size": 65536 00:24:07.177 }, 00:24:07.177 { 00:24:07.177 "name": "BaseBdev2", 00:24:07.177 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:07.177 "is_configured": true, 00:24:07.177 "data_offset": 0, 00:24:07.177 "data_size": 65536 00:24:07.177 }, 00:24:07.177 { 00:24:07.177 "name": "BaseBdev3", 00:24:07.177 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:07.177 "is_configured": true, 00:24:07.177 "data_offset": 0, 00:24:07.177 "data_size": 65536 00:24:07.177 }, 00:24:07.177 { 00:24:07.177 "name": "BaseBdev4", 00:24:07.177 "uuid": "9ebd713a-b22c-4b03-9046-3677bc07b7ec", 00:24:07.177 "is_configured": true, 00:24:07.178 "data_offset": 0, 00:24:07.178 "data_size": 65536 00:24:07.178 } 00:24:07.178 ] 00:24:07.178 }' 00:24:07.178 08:50:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.178 08:50:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:08.110 08:50:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:08.110 [2024-07-12 08:50:43.222960] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:08.110 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:08.110 "name": "Existed_Raid", 00:24:08.110 "aliases": [ 00:24:08.110 "5b18d2b4-370d-4327-8cf6-a45fd500678c" 00:24:08.110 ], 00:24:08.110 "product_name": "Raid Volume", 00:24:08.110 "block_size": 512, 00:24:08.110 "num_blocks": 262144, 00:24:08.110 "uuid": "5b18d2b4-370d-4327-8cf6-a45fd500678c", 00:24:08.110 "assigned_rate_limits": { 00:24:08.110 "rw_ios_per_sec": 0, 00:24:08.110 "rw_mbytes_per_sec": 0, 00:24:08.110 "r_mbytes_per_sec": 0, 00:24:08.110 "w_mbytes_per_sec": 0 00:24:08.110 }, 00:24:08.110 "claimed": false, 00:24:08.110 "zoned": false, 00:24:08.110 "supported_io_types": { 00:24:08.110 "read": true, 00:24:08.110 "write": true, 00:24:08.110 "unmap": true, 00:24:08.110 "flush": true, 00:24:08.110 "reset": true, 00:24:08.110 "nvme_admin": false, 00:24:08.110 "nvme_io": false, 00:24:08.110 "nvme_io_md": false, 00:24:08.110 "write_zeroes": true, 00:24:08.110 "zcopy": false, 00:24:08.110 "get_zone_info": false, 00:24:08.110 "zone_management": false, 00:24:08.110 "zone_append": false, 00:24:08.110 "compare": false, 00:24:08.110 "compare_and_write": false, 00:24:08.110 "abort": false, 00:24:08.110 "seek_hole": false, 00:24:08.110 "seek_data": false, 00:24:08.110 "copy": false, 00:24:08.110 "nvme_iov_md": false 00:24:08.110 }, 00:24:08.110 "memory_domains": [ 00:24:08.110 { 00:24:08.110 "dma_device_id": "system", 00:24:08.110 "dma_device_type": 1 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.110 "dma_device_type": 2 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "system", 00:24:08.110 "dma_device_type": 1 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.110 "dma_device_type": 2 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "system", 00:24:08.110 "dma_device_type": 1 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.110 "dma_device_type": 2 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "system", 00:24:08.110 "dma_device_type": 1 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.110 "dma_device_type": 2 00:24:08.110 } 00:24:08.110 ], 00:24:08.110 "driver_specific": { 00:24:08.110 "raid": { 00:24:08.110 "uuid": "5b18d2b4-370d-4327-8cf6-a45fd500678c", 00:24:08.110 "strip_size_kb": 64, 00:24:08.110 "state": "online", 00:24:08.110 "raid_level": "raid0", 00:24:08.110 "superblock": false, 00:24:08.110 "num_base_bdevs": 4, 00:24:08.110 "num_base_bdevs_discovered": 4, 00:24:08.110 "num_base_bdevs_operational": 4, 00:24:08.110 "base_bdevs_list": [ 00:24:08.110 { 00:24:08.110 "name": "BaseBdev1", 00:24:08.110 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:08.110 "is_configured": true, 00:24:08.110 "data_offset": 0, 00:24:08.110 "data_size": 65536 00:24:08.110 }, 00:24:08.110 { 00:24:08.110 "name": "BaseBdev2", 00:24:08.110 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:08.110 
"is_configured": true, 00:24:08.110 "data_offset": 0, 00:24:08.110 "data_size": 65536 00:24:08.111 }, 00:24:08.111 { 00:24:08.111 "name": "BaseBdev3", 00:24:08.111 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:08.111 "is_configured": true, 00:24:08.111 "data_offset": 0, 00:24:08.111 "data_size": 65536 00:24:08.111 }, 00:24:08.111 { 00:24:08.111 "name": "BaseBdev4", 00:24:08.111 "uuid": "9ebd713a-b22c-4b03-9046-3677bc07b7ec", 00:24:08.111 "is_configured": true, 00:24:08.111 "data_offset": 0, 00:24:08.111 "data_size": 65536 00:24:08.111 } 00:24:08.111 ] 00:24:08.111 } 00:24:08.111 } 00:24:08.111 }' 00:24:08.111 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:08.111 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:08.111 BaseBdev2 00:24:08.111 BaseBdev3 00:24:08.111 BaseBdev4' 00:24:08.111 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.111 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:08.111 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.369 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.369 "name": "BaseBdev1", 00:24:08.369 "aliases": [ 00:24:08.369 "1441f3d0-493c-43a8-a593-0e1c756407e6" 00:24:08.369 ], 00:24:08.369 "product_name": "Malloc disk", 00:24:08.369 "block_size": 512, 00:24:08.369 "num_blocks": 65536, 00:24:08.369 "uuid": "1441f3d0-493c-43a8-a593-0e1c756407e6", 00:24:08.369 "assigned_rate_limits": { 00:24:08.369 "rw_ios_per_sec": 0, 00:24:08.369 "rw_mbytes_per_sec": 0, 00:24:08.369 "r_mbytes_per_sec": 0, 00:24:08.369 "w_mbytes_per_sec": 0 00:24:08.369 }, 00:24:08.369 "claimed": true, 00:24:08.369 "claim_type": "exclusive_write", 00:24:08.369 "zoned": false, 00:24:08.369 "supported_io_types": { 00:24:08.369 "read": true, 00:24:08.369 "write": true, 00:24:08.369 "unmap": true, 00:24:08.369 "flush": true, 00:24:08.369 "reset": true, 00:24:08.369 "nvme_admin": false, 00:24:08.369 "nvme_io": false, 00:24:08.369 "nvme_io_md": false, 00:24:08.369 "write_zeroes": true, 00:24:08.369 "zcopy": true, 00:24:08.369 "get_zone_info": false, 00:24:08.369 "zone_management": false, 00:24:08.369 "zone_append": false, 00:24:08.369 "compare": false, 00:24:08.369 "compare_and_write": false, 00:24:08.369 "abort": true, 00:24:08.369 "seek_hole": false, 00:24:08.369 "seek_data": false, 00:24:08.369 "copy": true, 00:24:08.369 "nvme_iov_md": false 00:24:08.369 }, 00:24:08.369 "memory_domains": [ 00:24:08.369 { 00:24:08.369 "dma_device_id": "system", 00:24:08.369 "dma_device_type": 1 00:24:08.369 }, 00:24:08.369 { 00:24:08.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.369 "dma_device_type": 2 00:24:08.369 } 00:24:08.369 ], 00:24:08.369 "driver_specific": {} 00:24:08.369 }' 00:24:08.369 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.628 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:08.886 08:50:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:09.145 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:09.145 "name": "BaseBdev2", 00:24:09.145 "aliases": [ 00:24:09.145 "6be8da79-cb84-407b-b741-290cc2a907d9" 00:24:09.145 ], 00:24:09.145 "product_name": "Malloc disk", 00:24:09.145 "block_size": 512, 00:24:09.145 "num_blocks": 65536, 00:24:09.145 "uuid": "6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:09.145 "assigned_rate_limits": { 00:24:09.145 "rw_ios_per_sec": 0, 00:24:09.145 "rw_mbytes_per_sec": 0, 00:24:09.145 "r_mbytes_per_sec": 0, 00:24:09.145 "w_mbytes_per_sec": 0 00:24:09.145 }, 00:24:09.145 "claimed": true, 00:24:09.145 "claim_type": "exclusive_write", 00:24:09.145 "zoned": false, 00:24:09.145 "supported_io_types": { 00:24:09.145 "read": true, 00:24:09.145 "write": true, 00:24:09.145 "unmap": true, 00:24:09.145 "flush": true, 00:24:09.145 "reset": true, 00:24:09.145 "nvme_admin": false, 00:24:09.145 "nvme_io": false, 00:24:09.145 "nvme_io_md": false, 00:24:09.145 "write_zeroes": true, 00:24:09.145 "zcopy": true, 00:24:09.145 "get_zone_info": false, 00:24:09.145 "zone_management": false, 00:24:09.145 "zone_append": false, 00:24:09.145 "compare": false, 00:24:09.145 "compare_and_write": false, 00:24:09.145 "abort": true, 00:24:09.145 "seek_hole": false, 00:24:09.145 "seek_data": false, 00:24:09.145 "copy": true, 00:24:09.145 "nvme_iov_md": false 00:24:09.145 }, 00:24:09.145 "memory_domains": [ 00:24:09.145 { 00:24:09.145 "dma_device_id": "system", 00:24:09.145 "dma_device_type": 1 00:24:09.145 }, 00:24:09.145 { 00:24:09.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.145 "dma_device_type": 2 00:24:09.145 } 00:24:09.145 ], 00:24:09.145 "driver_specific": {} 00:24:09.145 }' 00:24:09.145 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.145 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.145 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:09.145 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.403 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.403 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:09.403 08:50:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.403 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.403 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.403 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.661 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.661 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.661 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:09.661 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:09.661 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:09.920 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:09.920 "name": "BaseBdev3", 00:24:09.920 "aliases": [ 00:24:09.920 "a29a7ef5-373a-41b6-956a-89b4382c79a7" 00:24:09.920 ], 00:24:09.920 "product_name": "Malloc disk", 00:24:09.920 "block_size": 512, 00:24:09.920 "num_blocks": 65536, 00:24:09.920 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:09.920 "assigned_rate_limits": { 00:24:09.920 "rw_ios_per_sec": 0, 00:24:09.920 "rw_mbytes_per_sec": 0, 00:24:09.920 "r_mbytes_per_sec": 0, 00:24:09.920 "w_mbytes_per_sec": 0 00:24:09.920 }, 00:24:09.920 "claimed": true, 00:24:09.920 "claim_type": "exclusive_write", 00:24:09.920 "zoned": false, 00:24:09.920 "supported_io_types": { 00:24:09.920 "read": true, 00:24:09.920 "write": true, 00:24:09.920 "unmap": true, 00:24:09.920 "flush": true, 00:24:09.920 "reset": true, 00:24:09.920 "nvme_admin": false, 00:24:09.920 "nvme_io": false, 00:24:09.920 "nvme_io_md": false, 00:24:09.920 "write_zeroes": true, 00:24:09.920 "zcopy": true, 00:24:09.920 "get_zone_info": false, 00:24:09.921 "zone_management": false, 00:24:09.921 "zone_append": false, 00:24:09.921 "compare": false, 00:24:09.921 "compare_and_write": false, 00:24:09.921 "abort": true, 00:24:09.921 "seek_hole": false, 00:24:09.921 "seek_data": false, 00:24:09.921 "copy": true, 00:24:09.921 "nvme_iov_md": false 00:24:09.921 }, 00:24:09.921 "memory_domains": [ 00:24:09.921 { 00:24:09.921 "dma_device_id": "system", 00:24:09.921 "dma_device_type": 1 00:24:09.921 }, 00:24:09.921 { 00:24:09.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.921 "dma_device_type": 2 00:24:09.921 } 00:24:09.921 ], 00:24:09.921 "driver_specific": {} 00:24:09.921 }' 00:24:09.921 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.921 08:50:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:09.921 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:09.921 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:09.921 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.180 
08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:10.180 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:10.439 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:10.439 "name": "BaseBdev4", 00:24:10.439 "aliases": [ 00:24:10.439 "9ebd713a-b22c-4b03-9046-3677bc07b7ec" 00:24:10.439 ], 00:24:10.439 "product_name": "Malloc disk", 00:24:10.439 "block_size": 512, 00:24:10.439 "num_blocks": 65536, 00:24:10.439 "uuid": "9ebd713a-b22c-4b03-9046-3677bc07b7ec", 00:24:10.439 "assigned_rate_limits": { 00:24:10.439 "rw_ios_per_sec": 0, 00:24:10.439 "rw_mbytes_per_sec": 0, 00:24:10.439 "r_mbytes_per_sec": 0, 00:24:10.439 "w_mbytes_per_sec": 0 00:24:10.439 }, 00:24:10.439 "claimed": true, 00:24:10.439 "claim_type": "exclusive_write", 00:24:10.439 "zoned": false, 00:24:10.439 "supported_io_types": { 00:24:10.439 "read": true, 00:24:10.439 "write": true, 00:24:10.439 "unmap": true, 00:24:10.439 "flush": true, 00:24:10.439 "reset": true, 00:24:10.439 "nvme_admin": false, 00:24:10.439 "nvme_io": false, 00:24:10.439 "nvme_io_md": false, 00:24:10.439 "write_zeroes": true, 00:24:10.439 "zcopy": true, 00:24:10.439 "get_zone_info": false, 00:24:10.439 "zone_management": false, 00:24:10.439 "zone_append": false, 00:24:10.439 "compare": false, 00:24:10.439 "compare_and_write": false, 00:24:10.439 "abort": true, 00:24:10.439 "seek_hole": false, 00:24:10.439 "seek_data": false, 00:24:10.439 "copy": true, 00:24:10.439 "nvme_iov_md": false 00:24:10.439 }, 00:24:10.439 "memory_domains": [ 00:24:10.439 { 00:24:10.439 "dma_device_id": "system", 00:24:10.439 "dma_device_type": 1 00:24:10.439 }, 00:24:10.439 { 00:24:10.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.439 "dma_device_type": 2 00:24:10.439 } 00:24:10.439 ], 00:24:10.439 "driver_specific": {} 00:24:10.439 }' 00:24:10.439 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:10.698 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.956 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:10.956 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:10.956 08:50:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:24:10.956 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:10.956 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:10.956 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:11.213 [2024-07-12 08:50:46.275533] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:11.213 [2024-07-12 08:50:46.275570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.213 [2024-07-12 08:50:46.275644] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.213 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.471 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.471 "name": "Existed_Raid", 00:24:11.471 "uuid": "5b18d2b4-370d-4327-8cf6-a45fd500678c", 00:24:11.471 "strip_size_kb": 64, 00:24:11.471 "state": "offline", 00:24:11.471 "raid_level": "raid0", 00:24:11.471 "superblock": false, 00:24:11.471 "num_base_bdevs": 4, 00:24:11.471 "num_base_bdevs_discovered": 3, 00:24:11.471 "num_base_bdevs_operational": 3, 00:24:11.471 "base_bdevs_list": [ 00:24:11.471 { 00:24:11.471 "name": null, 00:24:11.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.471 "is_configured": false, 00:24:11.471 "data_offset": 0, 00:24:11.471 "data_size": 65536 00:24:11.471 }, 00:24:11.471 { 00:24:11.471 "name": "BaseBdev2", 00:24:11.471 "uuid": 
"6be8da79-cb84-407b-b741-290cc2a907d9", 00:24:11.471 "is_configured": true, 00:24:11.471 "data_offset": 0, 00:24:11.471 "data_size": 65536 00:24:11.471 }, 00:24:11.471 { 00:24:11.471 "name": "BaseBdev3", 00:24:11.471 "uuid": "a29a7ef5-373a-41b6-956a-89b4382c79a7", 00:24:11.471 "is_configured": true, 00:24:11.471 "data_offset": 0, 00:24:11.471 "data_size": 65536 00:24:11.471 }, 00:24:11.471 { 00:24:11.471 "name": "BaseBdev4", 00:24:11.471 "uuid": "9ebd713a-b22c-4b03-9046-3677bc07b7ec", 00:24:11.471 "is_configured": true, 00:24:11.471 "data_offset": 0, 00:24:11.471 "data_size": 65536 00:24:11.471 } 00:24:11.471 ] 00:24:11.471 }' 00:24:11.471 08:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.471 08:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:12.453 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:12.712 [2024-07-12 08:50:47.830273] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:12.972 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:12.972 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:12.972 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.972 08:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:12.972 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:12.972 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:12.972 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:13.231 [2024-07-12 08:50:48.366505] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:13.489 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:13.489 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:13.489 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.489 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:13.748 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:13.748 08:50:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:13.748 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:13.748 [2024-07-12 08:50:48.893060] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:13.748 [2024-07-12 08:50:48.893144] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:14.007 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:14.007 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:14.007 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.007 08:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:14.266 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:14.525 BaseBdev2 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:14.525 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:14.783 [ 00:24:14.783 { 00:24:14.783 "name": "BaseBdev2", 00:24:14.783 "aliases": [ 00:24:14.784 "4d12f69c-9bbe-47f9-a357-e8a3117e91e8" 00:24:14.784 ], 00:24:14.784 "product_name": "Malloc disk", 00:24:14.784 "block_size": 512, 00:24:14.784 "num_blocks": 65536, 00:24:14.784 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:14.784 "assigned_rate_limits": { 00:24:14.784 "rw_ios_per_sec": 0, 00:24:14.784 "rw_mbytes_per_sec": 0, 00:24:14.784 "r_mbytes_per_sec": 0, 00:24:14.784 "w_mbytes_per_sec": 0 00:24:14.784 }, 00:24:14.784 "claimed": false, 00:24:14.784 "zoned": false, 00:24:14.784 "supported_io_types": { 00:24:14.784 "read": true, 00:24:14.784 "write": true, 00:24:14.784 "unmap": 
true, 00:24:14.784 "flush": true, 00:24:14.784 "reset": true, 00:24:14.784 "nvme_admin": false, 00:24:14.784 "nvme_io": false, 00:24:14.784 "nvme_io_md": false, 00:24:14.784 "write_zeroes": true, 00:24:14.784 "zcopy": true, 00:24:14.784 "get_zone_info": false, 00:24:14.784 "zone_management": false, 00:24:14.784 "zone_append": false, 00:24:14.784 "compare": false, 00:24:14.784 "compare_and_write": false, 00:24:14.784 "abort": true, 00:24:14.784 "seek_hole": false, 00:24:14.784 "seek_data": false, 00:24:14.784 "copy": true, 00:24:14.784 "nvme_iov_md": false 00:24:14.784 }, 00:24:14.784 "memory_domains": [ 00:24:14.784 { 00:24:14.784 "dma_device_id": "system", 00:24:14.784 "dma_device_type": 1 00:24:14.784 }, 00:24:14.784 { 00:24:14.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.784 "dma_device_type": 2 00:24:14.784 } 00:24:14.784 ], 00:24:14.784 "driver_specific": {} 00:24:14.784 } 00:24:14.784 ] 00:24:14.784 08:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:14.784 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:14.784 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:14.784 08:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:15.042 BaseBdev3 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:15.042 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:15.300 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:15.558 [ 00:24:15.558 { 00:24:15.558 "name": "BaseBdev3", 00:24:15.559 "aliases": [ 00:24:15.559 "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e" 00:24:15.559 ], 00:24:15.559 "product_name": "Malloc disk", 00:24:15.559 "block_size": 512, 00:24:15.559 "num_blocks": 65536, 00:24:15.559 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:15.559 "assigned_rate_limits": { 00:24:15.559 "rw_ios_per_sec": 0, 00:24:15.559 "rw_mbytes_per_sec": 0, 00:24:15.559 "r_mbytes_per_sec": 0, 00:24:15.559 "w_mbytes_per_sec": 0 00:24:15.559 }, 00:24:15.559 "claimed": false, 00:24:15.559 "zoned": false, 00:24:15.559 "supported_io_types": { 00:24:15.559 "read": true, 00:24:15.559 "write": true, 00:24:15.559 "unmap": true, 00:24:15.559 "flush": true, 00:24:15.559 "reset": true, 00:24:15.559 "nvme_admin": false, 00:24:15.559 "nvme_io": false, 00:24:15.559 "nvme_io_md": false, 00:24:15.559 "write_zeroes": true, 00:24:15.559 "zcopy": true, 00:24:15.559 "get_zone_info": false, 00:24:15.559 "zone_management": false, 00:24:15.559 "zone_append": false, 00:24:15.559 
"compare": false, 00:24:15.559 "compare_and_write": false, 00:24:15.559 "abort": true, 00:24:15.559 "seek_hole": false, 00:24:15.559 "seek_data": false, 00:24:15.559 "copy": true, 00:24:15.559 "nvme_iov_md": false 00:24:15.559 }, 00:24:15.559 "memory_domains": [ 00:24:15.559 { 00:24:15.559 "dma_device_id": "system", 00:24:15.559 "dma_device_type": 1 00:24:15.559 }, 00:24:15.559 { 00:24:15.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.559 "dma_device_type": 2 00:24:15.559 } 00:24:15.559 ], 00:24:15.559 "driver_specific": {} 00:24:15.559 } 00:24:15.559 ] 00:24:15.559 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:15.559 08:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:15.559 08:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:15.559 08:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:15.818 BaseBdev4 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:15.818 08:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:16.076 08:50:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:16.076 [ 00:24:16.076 { 00:24:16.076 "name": "BaseBdev4", 00:24:16.076 "aliases": [ 00:24:16.076 "0097ba4c-7d0f-40a9-8c94-800b9351a01f" 00:24:16.076 ], 00:24:16.076 "product_name": "Malloc disk", 00:24:16.076 "block_size": 512, 00:24:16.076 "num_blocks": 65536, 00:24:16.076 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:16.076 "assigned_rate_limits": { 00:24:16.076 "rw_ios_per_sec": 0, 00:24:16.076 "rw_mbytes_per_sec": 0, 00:24:16.076 "r_mbytes_per_sec": 0, 00:24:16.076 "w_mbytes_per_sec": 0 00:24:16.076 }, 00:24:16.076 "claimed": false, 00:24:16.076 "zoned": false, 00:24:16.076 "supported_io_types": { 00:24:16.076 "read": true, 00:24:16.076 "write": true, 00:24:16.076 "unmap": true, 00:24:16.076 "flush": true, 00:24:16.076 "reset": true, 00:24:16.076 "nvme_admin": false, 00:24:16.076 "nvme_io": false, 00:24:16.076 "nvme_io_md": false, 00:24:16.076 "write_zeroes": true, 00:24:16.076 "zcopy": true, 00:24:16.076 "get_zone_info": false, 00:24:16.076 "zone_management": false, 00:24:16.076 "zone_append": false, 00:24:16.076 "compare": false, 00:24:16.076 "compare_and_write": false, 00:24:16.076 "abort": true, 00:24:16.076 "seek_hole": false, 00:24:16.076 "seek_data": false, 00:24:16.076 "copy": true, 00:24:16.076 "nvme_iov_md": false 00:24:16.076 }, 00:24:16.076 "memory_domains": [ 00:24:16.076 { 00:24:16.076 "dma_device_id": "system", 00:24:16.076 
"dma_device_type": 1 00:24:16.076 }, 00:24:16.076 { 00:24:16.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.076 "dma_device_type": 2 00:24:16.076 } 00:24:16.076 ], 00:24:16.076 "driver_specific": {} 00:24:16.076 } 00:24:16.076 ] 00:24:16.334 08:50:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:16.334 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:16.334 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:16.334 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:16.334 [2024-07-12 08:50:51.504870] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:16.334 [2024-07-12 08:50:51.504965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:16.334 [2024-07-12 08:50:51.505012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.334 [2024-07-12 08:50:51.507189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:16.334 [2024-07-12 08:50:51.507294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.335 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.593 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.593 "name": "Existed_Raid", 00:24:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.593 "strip_size_kb": 64, 00:24:16.593 "state": "configuring", 00:24:16.593 "raid_level": "raid0", 00:24:16.593 "superblock": false, 00:24:16.593 "num_base_bdevs": 4, 00:24:16.593 "num_base_bdevs_discovered": 3, 00:24:16.593 "num_base_bdevs_operational": 4, 00:24:16.593 "base_bdevs_list": [ 00:24:16.593 { 00:24:16.593 "name": "BaseBdev1", 00:24:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.593 "is_configured": 
false, 00:24:16.593 "data_offset": 0, 00:24:16.593 "data_size": 0 00:24:16.593 }, 00:24:16.593 { 00:24:16.593 "name": "BaseBdev2", 00:24:16.593 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:16.593 "is_configured": true, 00:24:16.593 "data_offset": 0, 00:24:16.593 "data_size": 65536 00:24:16.593 }, 00:24:16.593 { 00:24:16.593 "name": "BaseBdev3", 00:24:16.593 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:16.593 "is_configured": true, 00:24:16.593 "data_offset": 0, 00:24:16.593 "data_size": 65536 00:24:16.593 }, 00:24:16.593 { 00:24:16.593 "name": "BaseBdev4", 00:24:16.593 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:16.593 "is_configured": true, 00:24:16.593 "data_offset": 0, 00:24:16.593 "data_size": 65536 00:24:16.593 } 00:24:16.593 ] 00:24:16.593 }' 00:24:16.593 08:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.593 08:50:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:17.529 [2024-07-12 08:50:52.661180] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.529 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.787 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.787 "name": "Existed_Raid", 00:24:17.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.787 "strip_size_kb": 64, 00:24:17.787 "state": "configuring", 00:24:17.787 "raid_level": "raid0", 00:24:17.787 "superblock": false, 00:24:17.787 "num_base_bdevs": 4, 00:24:17.787 "num_base_bdevs_discovered": 2, 00:24:17.787 "num_base_bdevs_operational": 4, 00:24:17.787 "base_bdevs_list": [ 00:24:17.787 { 00:24:17.787 "name": "BaseBdev1", 00:24:17.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.787 "is_configured": false, 00:24:17.787 "data_offset": 0, 00:24:17.787 "data_size": 0 00:24:17.787 }, 00:24:17.787 { 00:24:17.787 "name": null, 00:24:17.787 "uuid": 
"4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:17.787 "is_configured": false, 00:24:17.787 "data_offset": 0, 00:24:17.787 "data_size": 65536 00:24:17.787 }, 00:24:17.787 { 00:24:17.787 "name": "BaseBdev3", 00:24:17.787 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:17.787 "is_configured": true, 00:24:17.787 "data_offset": 0, 00:24:17.787 "data_size": 65536 00:24:17.787 }, 00:24:17.787 { 00:24:17.787 "name": "BaseBdev4", 00:24:17.787 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:17.787 "is_configured": true, 00:24:17.787 "data_offset": 0, 00:24:17.787 "data_size": 65536 00:24:17.787 } 00:24:17.787 ] 00:24:17.787 }' 00:24:17.787 08:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.787 08:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.357 08:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.357 08:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:18.613 08:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:18.613 08:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:18.871 [2024-07-12 08:50:53.980905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.871 BaseBdev1 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:18.871 08:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.129 08:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:19.388 [ 00:24:19.388 { 00:24:19.388 "name": "BaseBdev1", 00:24:19.388 "aliases": [ 00:24:19.388 "40eca429-bbfd-4d41-a6db-f136687c1c7b" 00:24:19.388 ], 00:24:19.388 "product_name": "Malloc disk", 00:24:19.388 "block_size": 512, 00:24:19.388 "num_blocks": 65536, 00:24:19.388 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:19.388 "assigned_rate_limits": { 00:24:19.388 "rw_ios_per_sec": 0, 00:24:19.388 "rw_mbytes_per_sec": 0, 00:24:19.388 "r_mbytes_per_sec": 0, 00:24:19.388 "w_mbytes_per_sec": 0 00:24:19.388 }, 00:24:19.388 "claimed": true, 00:24:19.388 "claim_type": "exclusive_write", 00:24:19.388 "zoned": false, 00:24:19.388 "supported_io_types": { 00:24:19.388 "read": true, 00:24:19.388 "write": true, 00:24:19.388 "unmap": true, 00:24:19.388 "flush": true, 00:24:19.388 "reset": true, 00:24:19.388 "nvme_admin": false, 00:24:19.388 "nvme_io": false, 00:24:19.388 
"nvme_io_md": false, 00:24:19.388 "write_zeroes": true, 00:24:19.388 "zcopy": true, 00:24:19.388 "get_zone_info": false, 00:24:19.388 "zone_management": false, 00:24:19.388 "zone_append": false, 00:24:19.388 "compare": false, 00:24:19.388 "compare_and_write": false, 00:24:19.388 "abort": true, 00:24:19.388 "seek_hole": false, 00:24:19.388 "seek_data": false, 00:24:19.388 "copy": true, 00:24:19.388 "nvme_iov_md": false 00:24:19.388 }, 00:24:19.388 "memory_domains": [ 00:24:19.388 { 00:24:19.388 "dma_device_id": "system", 00:24:19.388 "dma_device_type": 1 00:24:19.388 }, 00:24:19.388 { 00:24:19.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.388 "dma_device_type": 2 00:24:19.388 } 00:24:19.388 ], 00:24:19.388 "driver_specific": {} 00:24:19.388 } 00:24:19.388 ] 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.388 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.646 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.647 "name": "Existed_Raid", 00:24:19.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.647 "strip_size_kb": 64, 00:24:19.647 "state": "configuring", 00:24:19.647 "raid_level": "raid0", 00:24:19.647 "superblock": false, 00:24:19.647 "num_base_bdevs": 4, 00:24:19.647 "num_base_bdevs_discovered": 3, 00:24:19.647 "num_base_bdevs_operational": 4, 00:24:19.647 "base_bdevs_list": [ 00:24:19.647 { 00:24:19.647 "name": "BaseBdev1", 00:24:19.647 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:19.647 "is_configured": true, 00:24:19.647 "data_offset": 0, 00:24:19.647 "data_size": 65536 00:24:19.647 }, 00:24:19.647 { 00:24:19.647 "name": null, 00:24:19.647 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:19.647 "is_configured": false, 00:24:19.647 "data_offset": 0, 00:24:19.647 "data_size": 65536 00:24:19.647 }, 00:24:19.647 { 00:24:19.647 "name": "BaseBdev3", 00:24:19.647 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:19.647 "is_configured": true, 00:24:19.647 "data_offset": 0, 00:24:19.647 "data_size": 65536 00:24:19.647 }, 00:24:19.647 { 00:24:19.647 
"name": "BaseBdev4", 00:24:19.647 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:19.647 "is_configured": true, 00:24:19.647 "data_offset": 0, 00:24:19.647 "data_size": 65536 00:24:19.647 } 00:24:19.647 ] 00:24:19.647 }' 00:24:19.647 08:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.647 08:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.212 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.212 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:20.471 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:20.471 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:20.729 [2024-07-12 08:50:55.797409] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.729 08:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.987 08:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.987 "name": "Existed_Raid", 00:24:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.987 "strip_size_kb": 64, 00:24:20.987 "state": "configuring", 00:24:20.987 "raid_level": "raid0", 00:24:20.987 "superblock": false, 00:24:20.987 "num_base_bdevs": 4, 00:24:20.987 "num_base_bdevs_discovered": 2, 00:24:20.987 "num_base_bdevs_operational": 4, 00:24:20.987 "base_bdevs_list": [ 00:24:20.987 { 00:24:20.987 "name": "BaseBdev1", 00:24:20.987 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:20.987 "is_configured": true, 00:24:20.987 "data_offset": 0, 00:24:20.987 "data_size": 65536 00:24:20.987 }, 00:24:20.987 { 00:24:20.987 "name": null, 00:24:20.987 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:20.987 "is_configured": false, 00:24:20.987 "data_offset": 0, 00:24:20.987 "data_size": 
65536 00:24:20.987 }, 00:24:20.987 { 00:24:20.987 "name": null, 00:24:20.987 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:20.987 "is_configured": false, 00:24:20.987 "data_offset": 0, 00:24:20.987 "data_size": 65536 00:24:20.987 }, 00:24:20.987 { 00:24:20.987 "name": "BaseBdev4", 00:24:20.987 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:20.987 "is_configured": true, 00:24:20.987 "data_offset": 0, 00:24:20.987 "data_size": 65536 00:24:20.987 } 00:24:20.987 ] 00:24:20.987 }' 00:24:20.987 08:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.987 08:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.921 08:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.921 08:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:21.921 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:21.921 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:22.179 [2024-07-12 08:50:57.281848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.179 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.437 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.437 "name": "Existed_Raid", 00:24:22.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.437 "strip_size_kb": 64, 00:24:22.437 "state": "configuring", 00:24:22.437 "raid_level": "raid0", 00:24:22.437 "superblock": false, 00:24:22.437 "num_base_bdevs": 4, 00:24:22.437 "num_base_bdevs_discovered": 3, 00:24:22.437 "num_base_bdevs_operational": 4, 00:24:22.437 "base_bdevs_list": [ 00:24:22.437 { 00:24:22.437 "name": "BaseBdev1", 00:24:22.437 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:22.437 
"is_configured": true, 00:24:22.437 "data_offset": 0, 00:24:22.437 "data_size": 65536 00:24:22.437 }, 00:24:22.437 { 00:24:22.437 "name": null, 00:24:22.437 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:22.437 "is_configured": false, 00:24:22.437 "data_offset": 0, 00:24:22.437 "data_size": 65536 00:24:22.437 }, 00:24:22.437 { 00:24:22.437 "name": "BaseBdev3", 00:24:22.437 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:22.437 "is_configured": true, 00:24:22.437 "data_offset": 0, 00:24:22.437 "data_size": 65536 00:24:22.437 }, 00:24:22.437 { 00:24:22.437 "name": "BaseBdev4", 00:24:22.437 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:22.437 "is_configured": true, 00:24:22.437 "data_offset": 0, 00:24:22.437 "data_size": 65536 00:24:22.437 } 00:24:22.437 ] 00:24:22.437 }' 00:24:22.438 08:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.438 08:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.370 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.370 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:23.370 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:23.370 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:23.627 [2024-07-12 08:50:58.698148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.627 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.628 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.885 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.885 "name": "Existed_Raid", 00:24:23.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.885 "strip_size_kb": 64, 00:24:23.885 "state": "configuring", 00:24:23.885 "raid_level": "raid0", 00:24:23.885 "superblock": false, 00:24:23.885 
"num_base_bdevs": 4, 00:24:23.885 "num_base_bdevs_discovered": 2, 00:24:23.885 "num_base_bdevs_operational": 4, 00:24:23.885 "base_bdevs_list": [ 00:24:23.885 { 00:24:23.885 "name": null, 00:24:23.885 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:23.885 "is_configured": false, 00:24:23.885 "data_offset": 0, 00:24:23.885 "data_size": 65536 00:24:23.885 }, 00:24:23.885 { 00:24:23.885 "name": null, 00:24:23.885 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:23.885 "is_configured": false, 00:24:23.885 "data_offset": 0, 00:24:23.885 "data_size": 65536 00:24:23.885 }, 00:24:23.885 { 00:24:23.885 "name": "BaseBdev3", 00:24:23.885 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:23.885 "is_configured": true, 00:24:23.885 "data_offset": 0, 00:24:23.885 "data_size": 65536 00:24:23.885 }, 00:24:23.885 { 00:24:23.885 "name": "BaseBdev4", 00:24:23.885 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:23.885 "is_configured": true, 00:24:23.885 "data_offset": 0, 00:24:23.885 "data_size": 65536 00:24:23.885 } 00:24:23.885 ] 00:24:23.885 }' 00:24:23.885 08:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.885 08:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.892 08:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.892 08:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:24.892 08:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:24.892 08:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:25.150 [2024-07-12 08:51:00.168171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.150 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.151 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.409 08:51:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.409 "name": "Existed_Raid", 00:24:25.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.409 "strip_size_kb": 64, 00:24:25.409 "state": "configuring", 00:24:25.409 "raid_level": "raid0", 00:24:25.409 "superblock": false, 00:24:25.409 "num_base_bdevs": 4, 00:24:25.409 "num_base_bdevs_discovered": 3, 00:24:25.409 "num_base_bdevs_operational": 4, 00:24:25.409 "base_bdevs_list": [ 00:24:25.409 { 00:24:25.409 "name": null, 00:24:25.409 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:25.409 "is_configured": false, 00:24:25.409 "data_offset": 0, 00:24:25.409 "data_size": 65536 00:24:25.409 }, 00:24:25.409 { 00:24:25.409 "name": "BaseBdev2", 00:24:25.409 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:25.409 "is_configured": true, 00:24:25.409 "data_offset": 0, 00:24:25.409 "data_size": 65536 00:24:25.409 }, 00:24:25.409 { 00:24:25.409 "name": "BaseBdev3", 00:24:25.409 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:25.409 "is_configured": true, 00:24:25.409 "data_offset": 0, 00:24:25.409 "data_size": 65536 00:24:25.409 }, 00:24:25.409 { 00:24:25.409 "name": "BaseBdev4", 00:24:25.409 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:25.409 "is_configured": true, 00:24:25.409 "data_offset": 0, 00:24:25.409 "data_size": 65536 00:24:25.409 } 00:24:25.409 ] 00:24:25.409 }' 00:24:25.409 08:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.409 08:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.977 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.977 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:26.235 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:26.235 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:26.235 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.494 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 40eca429-bbfd-4d41-a6db-f136687c1c7b 00:24:26.753 [2024-07-12 08:51:01.847876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:26.753 [2024-07-12 08:51:01.847921] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:26.753 [2024-07-12 08:51:01.847930] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:26.753 [2024-07-12 08:51:01.848071] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:26.753 [2024-07-12 08:51:01.848453] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:26.753 [2024-07-12 08:51:01.848482] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:24:26.753 [2024-07-12 08:51:01.848744] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.753 NewBaseBdev 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:26.753 08:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:27.010 08:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:27.268 [ 00:24:27.268 { 00:24:27.268 "name": "NewBaseBdev", 00:24:27.268 "aliases": [ 00:24:27.268 "40eca429-bbfd-4d41-a6db-f136687c1c7b" 00:24:27.268 ], 00:24:27.268 "product_name": "Malloc disk", 00:24:27.268 "block_size": 512, 00:24:27.268 "num_blocks": 65536, 00:24:27.268 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:27.268 "assigned_rate_limits": { 00:24:27.268 "rw_ios_per_sec": 0, 00:24:27.268 "rw_mbytes_per_sec": 0, 00:24:27.268 "r_mbytes_per_sec": 0, 00:24:27.268 "w_mbytes_per_sec": 0 00:24:27.268 }, 00:24:27.268 "claimed": true, 00:24:27.268 "claim_type": "exclusive_write", 00:24:27.268 "zoned": false, 00:24:27.268 "supported_io_types": { 00:24:27.268 "read": true, 00:24:27.268 "write": true, 00:24:27.268 "unmap": true, 00:24:27.268 "flush": true, 00:24:27.268 "reset": true, 00:24:27.268 "nvme_admin": false, 00:24:27.268 "nvme_io": false, 00:24:27.268 "nvme_io_md": false, 00:24:27.268 "write_zeroes": true, 00:24:27.268 "zcopy": true, 00:24:27.268 "get_zone_info": false, 00:24:27.268 "zone_management": false, 00:24:27.268 "zone_append": false, 00:24:27.268 "compare": false, 00:24:27.268 "compare_and_write": false, 00:24:27.268 "abort": true, 00:24:27.268 "seek_hole": false, 00:24:27.268 "seek_data": false, 00:24:27.268 "copy": true, 00:24:27.268 "nvme_iov_md": false 00:24:27.268 }, 00:24:27.268 "memory_domains": [ 00:24:27.268 { 00:24:27.268 "dma_device_id": "system", 00:24:27.268 "dma_device_type": 1 00:24:27.268 }, 00:24:27.268 { 00:24:27.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.268 "dma_device_type": 2 00:24:27.268 } 00:24:27.268 ], 00:24:27.268 "driver_specific": {} 00:24:27.268 } 00:24:27.268 ] 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.268 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.525 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.525 "name": "Existed_Raid", 00:24:27.525 "uuid": "45a924a4-97ec-4814-8bbb-aeb2239dcaa5", 00:24:27.525 "strip_size_kb": 64, 00:24:27.525 "state": "online", 00:24:27.525 "raid_level": "raid0", 00:24:27.525 "superblock": false, 00:24:27.525 "num_base_bdevs": 4, 00:24:27.525 "num_base_bdevs_discovered": 4, 00:24:27.525 "num_base_bdevs_operational": 4, 00:24:27.525 "base_bdevs_list": [ 00:24:27.525 { 00:24:27.525 "name": "NewBaseBdev", 00:24:27.525 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:27.525 "is_configured": true, 00:24:27.525 "data_offset": 0, 00:24:27.525 "data_size": 65536 00:24:27.525 }, 00:24:27.525 { 00:24:27.525 "name": "BaseBdev2", 00:24:27.525 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:27.525 "is_configured": true, 00:24:27.525 "data_offset": 0, 00:24:27.525 "data_size": 65536 00:24:27.525 }, 00:24:27.525 { 00:24:27.525 "name": "BaseBdev3", 00:24:27.525 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:27.525 "is_configured": true, 00:24:27.525 "data_offset": 0, 00:24:27.525 "data_size": 65536 00:24:27.525 }, 00:24:27.525 { 00:24:27.525 "name": "BaseBdev4", 00:24:27.525 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:27.525 "is_configured": true, 00:24:27.525 "data_offset": 0, 00:24:27.525 "data_size": 65536 00:24:27.525 } 00:24:27.525 ] 00:24:27.525 }' 00:24:27.525 08:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.525 08:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:28.091 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:28.349 [2024-07-12 08:51:03.332694] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.349 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:28.349 "name": "Existed_Raid", 00:24:28.349 "aliases": [ 00:24:28.349 
"45a924a4-97ec-4814-8bbb-aeb2239dcaa5" 00:24:28.349 ], 00:24:28.349 "product_name": "Raid Volume", 00:24:28.349 "block_size": 512, 00:24:28.349 "num_blocks": 262144, 00:24:28.349 "uuid": "45a924a4-97ec-4814-8bbb-aeb2239dcaa5", 00:24:28.349 "assigned_rate_limits": { 00:24:28.349 "rw_ios_per_sec": 0, 00:24:28.349 "rw_mbytes_per_sec": 0, 00:24:28.349 "r_mbytes_per_sec": 0, 00:24:28.349 "w_mbytes_per_sec": 0 00:24:28.349 }, 00:24:28.349 "claimed": false, 00:24:28.349 "zoned": false, 00:24:28.349 "supported_io_types": { 00:24:28.349 "read": true, 00:24:28.349 "write": true, 00:24:28.349 "unmap": true, 00:24:28.349 "flush": true, 00:24:28.349 "reset": true, 00:24:28.349 "nvme_admin": false, 00:24:28.349 "nvme_io": false, 00:24:28.349 "nvme_io_md": false, 00:24:28.349 "write_zeroes": true, 00:24:28.349 "zcopy": false, 00:24:28.349 "get_zone_info": false, 00:24:28.349 "zone_management": false, 00:24:28.349 "zone_append": false, 00:24:28.349 "compare": false, 00:24:28.349 "compare_and_write": false, 00:24:28.349 "abort": false, 00:24:28.349 "seek_hole": false, 00:24:28.349 "seek_data": false, 00:24:28.349 "copy": false, 00:24:28.349 "nvme_iov_md": false 00:24:28.349 }, 00:24:28.349 "memory_domains": [ 00:24:28.349 { 00:24:28.349 "dma_device_id": "system", 00:24:28.349 "dma_device_type": 1 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.349 "dma_device_type": 2 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "system", 00:24:28.349 "dma_device_type": 1 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.349 "dma_device_type": 2 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "system", 00:24:28.349 "dma_device_type": 1 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.349 "dma_device_type": 2 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "system", 00:24:28.349 "dma_device_type": 1 00:24:28.349 }, 00:24:28.349 { 00:24:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.349 "dma_device_type": 2 00:24:28.349 } 00:24:28.349 ], 00:24:28.349 "driver_specific": { 00:24:28.349 "raid": { 00:24:28.349 "uuid": "45a924a4-97ec-4814-8bbb-aeb2239dcaa5", 00:24:28.349 "strip_size_kb": 64, 00:24:28.349 "state": "online", 00:24:28.349 "raid_level": "raid0", 00:24:28.349 "superblock": false, 00:24:28.349 "num_base_bdevs": 4, 00:24:28.349 "num_base_bdevs_discovered": 4, 00:24:28.349 "num_base_bdevs_operational": 4, 00:24:28.349 "base_bdevs_list": [ 00:24:28.349 { 00:24:28.350 "name": "NewBaseBdev", 00:24:28.350 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:28.350 "is_configured": true, 00:24:28.350 "data_offset": 0, 00:24:28.350 "data_size": 65536 00:24:28.350 }, 00:24:28.350 { 00:24:28.350 "name": "BaseBdev2", 00:24:28.350 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:28.350 "is_configured": true, 00:24:28.350 "data_offset": 0, 00:24:28.350 "data_size": 65536 00:24:28.350 }, 00:24:28.350 { 00:24:28.350 "name": "BaseBdev3", 00:24:28.350 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:28.350 "is_configured": true, 00:24:28.350 "data_offset": 0, 00:24:28.350 "data_size": 65536 00:24:28.350 }, 00:24:28.350 { 00:24:28.350 "name": "BaseBdev4", 00:24:28.350 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:28.350 "is_configured": true, 00:24:28.350 "data_offset": 0, 00:24:28.350 "data_size": 65536 00:24:28.350 } 00:24:28.350 ] 00:24:28.350 } 00:24:28.350 } 00:24:28.350 }' 00:24:28.350 08:51:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:28.350 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:28.350 BaseBdev2 00:24:28.350 BaseBdev3 00:24:28.350 BaseBdev4' 00:24:28.350 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:28.350 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:28.350 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:28.607 "name": "NewBaseBdev", 00:24:28.607 "aliases": [ 00:24:28.607 "40eca429-bbfd-4d41-a6db-f136687c1c7b" 00:24:28.607 ], 00:24:28.607 "product_name": "Malloc disk", 00:24:28.607 "block_size": 512, 00:24:28.607 "num_blocks": 65536, 00:24:28.607 "uuid": "40eca429-bbfd-4d41-a6db-f136687c1c7b", 00:24:28.607 "assigned_rate_limits": { 00:24:28.607 "rw_ios_per_sec": 0, 00:24:28.607 "rw_mbytes_per_sec": 0, 00:24:28.607 "r_mbytes_per_sec": 0, 00:24:28.607 "w_mbytes_per_sec": 0 00:24:28.607 }, 00:24:28.607 "claimed": true, 00:24:28.607 "claim_type": "exclusive_write", 00:24:28.607 "zoned": false, 00:24:28.607 "supported_io_types": { 00:24:28.607 "read": true, 00:24:28.607 "write": true, 00:24:28.607 "unmap": true, 00:24:28.607 "flush": true, 00:24:28.607 "reset": true, 00:24:28.607 "nvme_admin": false, 00:24:28.607 "nvme_io": false, 00:24:28.607 "nvme_io_md": false, 00:24:28.607 "write_zeroes": true, 00:24:28.607 "zcopy": true, 00:24:28.607 "get_zone_info": false, 00:24:28.607 "zone_management": false, 00:24:28.607 "zone_append": false, 00:24:28.607 "compare": false, 00:24:28.607 "compare_and_write": false, 00:24:28.607 "abort": true, 00:24:28.607 "seek_hole": false, 00:24:28.607 "seek_data": false, 00:24:28.607 "copy": true, 00:24:28.607 "nvme_iov_md": false 00:24:28.607 }, 00:24:28.607 "memory_domains": [ 00:24:28.607 { 00:24:28.607 "dma_device_id": "system", 00:24:28.607 "dma_device_type": 1 00:24:28.607 }, 00:24:28.607 { 00:24:28.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.607 "dma_device_type": 2 00:24:28.607 } 00:24:28.607 ], 00:24:28.607 "driver_specific": {} 00:24:28.607 }' 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:28.607 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:28.865 08:51:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:28.865 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.865 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:28.865 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:28.865 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:29.124 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:29.124 "name": "BaseBdev2", 00:24:29.124 "aliases": [ 00:24:29.124 "4d12f69c-9bbe-47f9-a357-e8a3117e91e8" 00:24:29.124 ], 00:24:29.124 "product_name": "Malloc disk", 00:24:29.124 "block_size": 512, 00:24:29.124 "num_blocks": 65536, 00:24:29.124 "uuid": "4d12f69c-9bbe-47f9-a357-e8a3117e91e8", 00:24:29.124 "assigned_rate_limits": { 00:24:29.124 "rw_ios_per_sec": 0, 00:24:29.124 "rw_mbytes_per_sec": 0, 00:24:29.124 "r_mbytes_per_sec": 0, 00:24:29.124 "w_mbytes_per_sec": 0 00:24:29.124 }, 00:24:29.124 "claimed": true, 00:24:29.124 "claim_type": "exclusive_write", 00:24:29.124 "zoned": false, 00:24:29.124 "supported_io_types": { 00:24:29.124 "read": true, 00:24:29.124 "write": true, 00:24:29.124 "unmap": true, 00:24:29.124 "flush": true, 00:24:29.124 "reset": true, 00:24:29.124 "nvme_admin": false, 00:24:29.124 "nvme_io": false, 00:24:29.124 "nvme_io_md": false, 00:24:29.124 "write_zeroes": true, 00:24:29.124 "zcopy": true, 00:24:29.124 "get_zone_info": false, 00:24:29.124 "zone_management": false, 00:24:29.124 "zone_append": false, 00:24:29.124 "compare": false, 00:24:29.124 "compare_and_write": false, 00:24:29.124 "abort": true, 00:24:29.124 "seek_hole": false, 00:24:29.124 "seek_data": false, 00:24:29.124 "copy": true, 00:24:29.124 "nvme_iov_md": false 00:24:29.124 }, 00:24:29.124 "memory_domains": [ 00:24:29.124 { 00:24:29.124 "dma_device_id": "system", 00:24:29.124 "dma_device_type": 1 00:24:29.124 }, 00:24:29.124 { 00:24:29.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.124 "dma_device_type": 2 00:24:29.124 } 00:24:29.124 ], 00:24:29.124 "driver_specific": {} 00:24:29.124 }' 00:24:29.124 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:29.383 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:29.641 08:51:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:29.641 08:51:04
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:29.641 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:29.898 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:29.898 "name": "BaseBdev3", 00:24:29.898 "aliases": [ 00:24:29.898 "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e" 00:24:29.898 ], 00:24:29.898 "product_name": "Malloc disk", 00:24:29.898 "block_size": 512, 00:24:29.898 "num_blocks": 65536, 00:24:29.898 "uuid": "f24762b1-9cf6-4ca5-bde7-8d6aa4452c4e", 00:24:29.898 "assigned_rate_limits": { 00:24:29.899 "rw_ios_per_sec": 0, 00:24:29.899 "rw_mbytes_per_sec": 0, 00:24:29.899 "r_mbytes_per_sec": 0, 00:24:29.899 "w_mbytes_per_sec": 0 00:24:29.899 }, 00:24:29.899 "claimed": true, 00:24:29.899 "claim_type": "exclusive_write", 00:24:29.899 "zoned": false, 00:24:29.899 "supported_io_types": { 00:24:29.899 "read": true, 00:24:29.899 "write": true, 00:24:29.899 "unmap": true, 00:24:29.899 "flush": true, 00:24:29.899 "reset": true, 00:24:29.899 "nvme_admin": false, 00:24:29.899 "nvme_io": false, 00:24:29.899 "nvme_io_md": false, 00:24:29.899 "write_zeroes": true, 00:24:29.899 "zcopy": true, 00:24:29.899 "get_zone_info": false, 00:24:29.899 "zone_management": false, 00:24:29.899 "zone_append": false, 00:24:29.899 "compare": false, 00:24:29.899 "compare_and_write": false, 00:24:29.899 "abort": true, 00:24:29.899 "seek_hole": false, 00:24:29.899 "seek_data": false, 00:24:29.899 "copy": true, 00:24:29.899 "nvme_iov_md": false 00:24:29.899 }, 00:24:29.899 "memory_domains": [ 00:24:29.899 { 00:24:29.899 "dma_device_id": "system", 00:24:29.899 "dma_device_type": 1 00:24:29.899 }, 00:24:29.899 { 00:24:29.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.899 "dma_device_type": 2 00:24:29.899 } 00:24:29.899 ], 00:24:29.899 "driver_specific": {} 00:24:29.899 }' 00:24:29.899 08:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.899 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.899 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:29.899 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.157 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.415 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.415 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.415 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.415 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:30.415 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.673 "name": "BaseBdev4", 00:24:30.673 "aliases": [ 00:24:30.673 "0097ba4c-7d0f-40a9-8c94-800b9351a01f" 00:24:30.673 ], 00:24:30.673 "product_name": "Malloc disk", 00:24:30.673 "block_size": 512, 00:24:30.673 "num_blocks": 65536, 00:24:30.673 "uuid": "0097ba4c-7d0f-40a9-8c94-800b9351a01f", 00:24:30.673 "assigned_rate_limits": { 00:24:30.673 "rw_ios_per_sec": 0, 00:24:30.673 "rw_mbytes_per_sec": 0, 00:24:30.673 "r_mbytes_per_sec": 0, 00:24:30.673 "w_mbytes_per_sec": 0 00:24:30.673 }, 00:24:30.673 "claimed": true, 00:24:30.673 "claim_type": "exclusive_write", 00:24:30.673 "zoned": false, 00:24:30.673 "supported_io_types": { 00:24:30.673 "read": true, 00:24:30.673 "write": true, 00:24:30.673 "unmap": true, 00:24:30.673 "flush": true, 00:24:30.673 "reset": true, 00:24:30.673 "nvme_admin": false, 00:24:30.673 "nvme_io": false, 00:24:30.673 "nvme_io_md": false, 00:24:30.673 "write_zeroes": true, 00:24:30.673 "zcopy": true, 00:24:30.673 "get_zone_info": false, 00:24:30.673 "zone_management": false, 00:24:30.673 "zone_append": false, 00:24:30.673 "compare": false, 00:24:30.673 "compare_and_write": false, 00:24:30.673 "abort": true, 00:24:30.673 "seek_hole": false, 00:24:30.673 "seek_data": false, 00:24:30.673 "copy": true, 00:24:30.673 "nvme_iov_md": false 00:24:30.673 }, 00:24:30.673 "memory_domains": [ 00:24:30.673 { 00:24:30.673 "dma_device_id": "system", 00:24:30.673 "dma_device_type": 1 00:24:30.673 }, 00:24:30.673 { 00:24:30.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.673 "dma_device_type": 2 00:24:30.673 } 00:24:30.673 ], 00:24:30.673 "driver_specific": {} 00:24:30.673 }' 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.673 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.931 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.931 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.931 08:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.931 08:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.931 08:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.189 [2024-07-12 08:51:06.214149] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.189 [2024-07-12 08:51:06.214189] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:24:31.189 [2024-07-12 08:51:06.214297] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:31.189 [2024-07-12 08:51:06.214425] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:31.189 [2024-07-12 08:51:06.214449] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135629 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 135629 ']' 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 135629 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135629 00:24:31.189 killing process with pid 135629 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135629' 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 135629 00:24:31.189 08:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 135629 00:24:31.189 [2024-07-12 08:51:06.245392] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:31.447 [2024-07-12 08:51:06.502146] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:32.383 ************************************ 00:24:32.383 END TEST raid_state_function_test 00:24:32.383 ************************************ 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:32.383 00:24:32.383 real 0m35.111s 00:24:32.383 user 1m6.087s 00:24:32.383 sys 0m3.823s 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.383 08:51:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:32.383 08:51:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:24:32.383 08:51:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:32.383 08:51:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.383 08:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:32.383 ************************************ 00:24:32.383 START TEST raid_state_function_test_sb 00:24:32.383 ************************************ 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:32.383 08:51:07 
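The raid_state_function_test_sb run that starts here drives the same configuring/online state machine as the test that just passed; the functional difference is the superblock argument. A hedged sketch of the distinction, using the same rpc.py socket shorthand as above (the two create calls are alternatives, not meant to run in the same process; the surrounding script plumbing is simplified):

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # superblock=false (the run above): members are consumed from block 0, so
    # each 65536-block malloc bdev reports data_offset=0, data_size=65536.
    rpc_py bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # superblock=true (this run, created with -s): each member reserves room for
    # on-disk raid metadata, which is why the descriptors later in this log
    # report data_offset=2048 and data_size=63488 (65536 - 2048) for the
    # same 65536-block members.
    rpc_py bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid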
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136790 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136790' 00:24:32.383 Process raid pid: 136790 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136790 /var/tmp/spdk-raid.sock 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 136790 ']' 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.383 08:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.642 [2024-07-12 08:51:07.613733] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:24:32.642 [2024-07-12 08:51:07.613952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.642 [2024-07-12 08:51:07.786161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.900 [2024-07-12 08:51:08.015085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.159 [2024-07-12 08:51:08.185492] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:33.417 08:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.417 08:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:24:33.417 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:33.675 [2024-07-12 08:51:08.743115] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:33.675 [2024-07-12 08:51:08.743221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:33.675 [2024-07-12 08:51:08.743252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:33.675 [2024-07-12 08:51:08.743277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:33.675 [2024-07-12 08:51:08.743286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:33.675 [2024-07-12 08:51:08.743301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:33.675 [2024-07-12 08:51:08.743309] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:33.675 [2024-07-12 08:51:08.743345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:33.675 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:33.675 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:33.675 08:51:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:33.675 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:33.675 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.676 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.934 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.934 "name": "Existed_Raid", 00:24:33.934 "uuid": "d3a3f505-53d7-4ecd-a53b-f961213ba8b3", 00:24:33.934 "strip_size_kb": 64, 00:24:33.934 "state": "configuring", 00:24:33.934 "raid_level": "raid0", 00:24:33.934 "superblock": true, 00:24:33.934 "num_base_bdevs": 4, 00:24:33.934 "num_base_bdevs_discovered": 0, 00:24:33.934 "num_base_bdevs_operational": 4, 00:24:33.934 "base_bdevs_list": [ 00:24:33.934 { 00:24:33.934 "name": "BaseBdev1", 00:24:33.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.934 "is_configured": false, 00:24:33.934 "data_offset": 0, 00:24:33.934 "data_size": 0 00:24:33.934 }, 00:24:33.934 { 00:24:33.934 "name": "BaseBdev2", 00:24:33.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.934 "is_configured": false, 00:24:33.934 "data_offset": 0, 00:24:33.934 "data_size": 0 00:24:33.934 }, 00:24:33.934 { 00:24:33.934 "name": "BaseBdev3", 00:24:33.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.934 "is_configured": false, 00:24:33.934 "data_offset": 0, 00:24:33.934 "data_size": 0 00:24:33.934 }, 00:24:33.934 { 00:24:33.934 "name": "BaseBdev4", 00:24:33.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.934 "is_configured": false, 00:24:33.934 "data_offset": 0, 00:24:33.934 "data_size": 0 00:24:33.934 } 00:24:33.935 ] 00:24:33.935 }' 00:24:33.935 08:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.935 08:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.867 08:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:34.867 [2024-07-12 08:51:09.895174] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:34.867 [2024-07-12 08:51:09.895215] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:34.867 08:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:35.125 [2024-07-12 
08:51:10.167284] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:35.126 [2024-07-12 08:51:10.167375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:35.126 [2024-07-12 08:51:10.167404] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:35.126 [2024-07-12 08:51:10.167464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:35.126 [2024-07-12 08:51:10.167474] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:35.126 [2024-07-12 08:51:10.167505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:35.126 [2024-07-12 08:51:10.167513] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:35.126 [2024-07-12 08:51:10.167535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:35.126 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:35.384 [2024-07-12 08:51:10.458553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:35.384 BaseBdev1 00:24:35.384 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:35.385 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.643 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:35.902 [ 00:24:35.902 { 00:24:35.902 "name": "BaseBdev1", 00:24:35.902 "aliases": [ 00:24:35.902 "30dd3593-98de-47c5-9f35-336bee32a872" 00:24:35.902 ], 00:24:35.902 "product_name": "Malloc disk", 00:24:35.902 "block_size": 512, 00:24:35.902 "num_blocks": 65536, 00:24:35.902 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:35.902 "assigned_rate_limits": { 00:24:35.902 "rw_ios_per_sec": 0, 00:24:35.902 "rw_mbytes_per_sec": 0, 00:24:35.902 "r_mbytes_per_sec": 0, 00:24:35.902 "w_mbytes_per_sec": 0 00:24:35.902 }, 00:24:35.902 "claimed": true, 00:24:35.902 "claim_type": "exclusive_write", 00:24:35.902 "zoned": false, 00:24:35.902 "supported_io_types": { 00:24:35.902 "read": true, 00:24:35.902 "write": true, 00:24:35.902 "unmap": true, 00:24:35.902 "flush": true, 00:24:35.902 "reset": true, 00:24:35.902 "nvme_admin": false, 00:24:35.902 "nvme_io": false, 00:24:35.902 "nvme_io_md": false, 00:24:35.902 "write_zeroes": true, 00:24:35.902 "zcopy": true, 00:24:35.902 "get_zone_info": false, 00:24:35.902 "zone_management": false, 00:24:35.902 "zone_append": false, 00:24:35.902 
"compare": false, 00:24:35.902 "compare_and_write": false, 00:24:35.902 "abort": true, 00:24:35.902 "seek_hole": false, 00:24:35.902 "seek_data": false, 00:24:35.902 "copy": true, 00:24:35.902 "nvme_iov_md": false 00:24:35.902 }, 00:24:35.902 "memory_domains": [ 00:24:35.902 { 00:24:35.902 "dma_device_id": "system", 00:24:35.902 "dma_device_type": 1 00:24:35.902 }, 00:24:35.902 { 00:24:35.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.902 "dma_device_type": 2 00:24:35.902 } 00:24:35.902 ], 00:24:35.902 "driver_specific": {} 00:24:35.902 } 00:24:35.902 ] 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.902 08:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.160 08:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.160 "name": "Existed_Raid", 00:24:36.160 "uuid": "0df39e3f-7c9b-470f-8724-67a734b90182", 00:24:36.160 "strip_size_kb": 64, 00:24:36.160 "state": "configuring", 00:24:36.160 "raid_level": "raid0", 00:24:36.160 "superblock": true, 00:24:36.160 "num_base_bdevs": 4, 00:24:36.160 "num_base_bdevs_discovered": 1, 00:24:36.160 "num_base_bdevs_operational": 4, 00:24:36.160 "base_bdevs_list": [ 00:24:36.160 { 00:24:36.160 "name": "BaseBdev1", 00:24:36.160 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:36.160 "is_configured": true, 00:24:36.160 "data_offset": 2048, 00:24:36.160 "data_size": 63488 00:24:36.160 }, 00:24:36.160 { 00:24:36.160 "name": "BaseBdev2", 00:24:36.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.160 "is_configured": false, 00:24:36.160 "data_offset": 0, 00:24:36.160 "data_size": 0 00:24:36.160 }, 00:24:36.160 { 00:24:36.160 "name": "BaseBdev3", 00:24:36.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.160 "is_configured": false, 00:24:36.160 "data_offset": 0, 00:24:36.160 "data_size": 0 00:24:36.160 }, 00:24:36.160 { 00:24:36.160 "name": "BaseBdev4", 00:24:36.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.160 "is_configured": false, 00:24:36.160 "data_offset": 0, 00:24:36.160 
"data_size": 0 00:24:36.160 } 00:24:36.160 ] 00:24:36.160 }' 00:24:36.160 08:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.160 08:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:36.726 08:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:36.984 [2024-07-12 08:51:11.978935] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:36.984 [2024-07-12 08:51:11.979014] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:24:36.984 08:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:37.243 [2024-07-12 08:51:12.190997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.243 [2024-07-12 08:51:12.192991] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:37.243 [2024-07-12 08:51:12.193064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:37.243 [2024-07-12 08:51:12.193092] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:37.243 [2024-07-12 08:51:12.193117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:37.243 [2024-07-12 08:51:12.193126] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:37.243 [2024-07-12 08:51:12.193153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.243 08:51:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.502 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.502 "name": "Existed_Raid", 00:24:37.502 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:37.502 "strip_size_kb": 64, 00:24:37.502 "state": "configuring", 00:24:37.502 "raid_level": "raid0", 00:24:37.502 "superblock": true, 00:24:37.502 "num_base_bdevs": 4, 00:24:37.502 "num_base_bdevs_discovered": 1, 00:24:37.502 "num_base_bdevs_operational": 4, 00:24:37.502 "base_bdevs_list": [ 00:24:37.502 { 00:24:37.502 "name": "BaseBdev1", 00:24:37.502 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:37.502 "is_configured": true, 00:24:37.502 "data_offset": 2048, 00:24:37.502 "data_size": 63488 00:24:37.502 }, 00:24:37.502 { 00:24:37.502 "name": "BaseBdev2", 00:24:37.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.502 "is_configured": false, 00:24:37.502 "data_offset": 0, 00:24:37.502 "data_size": 0 00:24:37.502 }, 00:24:37.502 { 00:24:37.502 "name": "BaseBdev3", 00:24:37.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.502 "is_configured": false, 00:24:37.502 "data_offset": 0, 00:24:37.502 "data_size": 0 00:24:37.502 }, 00:24:37.502 { 00:24:37.502 "name": "BaseBdev4", 00:24:37.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.502 "is_configured": false, 00:24:37.502 "data_offset": 0, 00:24:37.502 "data_size": 0 00:24:37.502 } 00:24:37.502 ] 00:24:37.502 }' 00:24:37.502 08:51:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.502 08:51:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.069 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:38.326 [2024-07-12 08:51:13.370441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.326 BaseBdev2 00:24:38.326 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:38.327 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:38.584 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:38.850 [ 00:24:38.850 { 00:24:38.850 "name": "BaseBdev2", 00:24:38.850 "aliases": [ 00:24:38.850 "e397b2bc-5f5e-4c1e-96b0-71585b0b8500" 00:24:38.850 ], 00:24:38.850 "product_name": "Malloc disk", 00:24:38.850 "block_size": 512, 00:24:38.850 "num_blocks": 65536, 00:24:38.850 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:38.850 "assigned_rate_limits": { 00:24:38.850 "rw_ios_per_sec": 0, 
00:24:38.850 "rw_mbytes_per_sec": 0, 00:24:38.850 "r_mbytes_per_sec": 0, 00:24:38.850 "w_mbytes_per_sec": 0 00:24:38.850 }, 00:24:38.850 "claimed": true, 00:24:38.850 "claim_type": "exclusive_write", 00:24:38.850 "zoned": false, 00:24:38.850 "supported_io_types": { 00:24:38.850 "read": true, 00:24:38.850 "write": true, 00:24:38.850 "unmap": true, 00:24:38.850 "flush": true, 00:24:38.850 "reset": true, 00:24:38.850 "nvme_admin": false, 00:24:38.850 "nvme_io": false, 00:24:38.850 "nvme_io_md": false, 00:24:38.850 "write_zeroes": true, 00:24:38.850 "zcopy": true, 00:24:38.850 "get_zone_info": false, 00:24:38.850 "zone_management": false, 00:24:38.850 "zone_append": false, 00:24:38.850 "compare": false, 00:24:38.850 "compare_and_write": false, 00:24:38.850 "abort": true, 00:24:38.850 "seek_hole": false, 00:24:38.850 "seek_data": false, 00:24:38.850 "copy": true, 00:24:38.850 "nvme_iov_md": false 00:24:38.850 }, 00:24:38.850 "memory_domains": [ 00:24:38.850 { 00:24:38.850 "dma_device_id": "system", 00:24:38.850 "dma_device_type": 1 00:24:38.850 }, 00:24:38.850 { 00:24:38.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.850 "dma_device_type": 2 00:24:38.850 } 00:24:38.850 ], 00:24:38.850 "driver_specific": {} 00:24:38.850 } 00:24:38.850 ] 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.850 08:51:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.123 08:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.123 "name": "Existed_Raid", 00:24:39.123 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:39.123 "strip_size_kb": 64, 00:24:39.123 "state": "configuring", 00:24:39.123 "raid_level": "raid0", 00:24:39.123 "superblock": true, 00:24:39.123 "num_base_bdevs": 4, 00:24:39.123 "num_base_bdevs_discovered": 2, 00:24:39.123 
"num_base_bdevs_operational": 4, 00:24:39.123 "base_bdevs_list": [ 00:24:39.123 { 00:24:39.123 "name": "BaseBdev1", 00:24:39.123 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:39.123 "is_configured": true, 00:24:39.123 "data_offset": 2048, 00:24:39.123 "data_size": 63488 00:24:39.123 }, 00:24:39.123 { 00:24:39.123 "name": "BaseBdev2", 00:24:39.123 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:39.123 "is_configured": true, 00:24:39.123 "data_offset": 2048, 00:24:39.123 "data_size": 63488 00:24:39.123 }, 00:24:39.123 { 00:24:39.123 "name": "BaseBdev3", 00:24:39.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.123 "is_configured": false, 00:24:39.123 "data_offset": 0, 00:24:39.123 "data_size": 0 00:24:39.123 }, 00:24:39.123 { 00:24:39.123 "name": "BaseBdev4", 00:24:39.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.123 "is_configured": false, 00:24:39.123 "data_offset": 0, 00:24:39.123 "data_size": 0 00:24:39.123 } 00:24:39.123 ] 00:24:39.123 }' 00:24:39.123 08:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.123 08:51:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.689 08:51:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:39.948 [2024-07-12 08:51:14.994307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:39.948 BaseBdev3 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:39.948 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:40.206 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:40.465 [ 00:24:40.465 { 00:24:40.465 "name": "BaseBdev3", 00:24:40.466 "aliases": [ 00:24:40.466 "d34abca0-c306-4fcb-97a1-a0987792f947" 00:24:40.466 ], 00:24:40.466 "product_name": "Malloc disk", 00:24:40.466 "block_size": 512, 00:24:40.466 "num_blocks": 65536, 00:24:40.466 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:40.466 "assigned_rate_limits": { 00:24:40.466 "rw_ios_per_sec": 0, 00:24:40.466 "rw_mbytes_per_sec": 0, 00:24:40.466 "r_mbytes_per_sec": 0, 00:24:40.466 "w_mbytes_per_sec": 0 00:24:40.466 }, 00:24:40.466 "claimed": true, 00:24:40.466 "claim_type": "exclusive_write", 00:24:40.466 "zoned": false, 00:24:40.466 "supported_io_types": { 00:24:40.466 "read": true, 00:24:40.466 "write": true, 00:24:40.466 "unmap": true, 00:24:40.466 "flush": true, 00:24:40.466 "reset": true, 00:24:40.466 "nvme_admin": false, 00:24:40.466 "nvme_io": false, 00:24:40.466 "nvme_io_md": false, 00:24:40.466 
"write_zeroes": true, 00:24:40.466 "zcopy": true, 00:24:40.466 "get_zone_info": false, 00:24:40.466 "zone_management": false, 00:24:40.466 "zone_append": false, 00:24:40.466 "compare": false, 00:24:40.466 "compare_and_write": false, 00:24:40.466 "abort": true, 00:24:40.466 "seek_hole": false, 00:24:40.466 "seek_data": false, 00:24:40.466 "copy": true, 00:24:40.466 "nvme_iov_md": false 00:24:40.466 }, 00:24:40.466 "memory_domains": [ 00:24:40.466 { 00:24:40.466 "dma_device_id": "system", 00:24:40.466 "dma_device_type": 1 00:24:40.466 }, 00:24:40.466 { 00:24:40.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.466 "dma_device_type": 2 00:24:40.466 } 00:24:40.466 ], 00:24:40.466 "driver_specific": {} 00:24:40.466 } 00:24:40.466 ] 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.466 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.724 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.724 "name": "Existed_Raid", 00:24:40.724 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:40.724 "strip_size_kb": 64, 00:24:40.724 "state": "configuring", 00:24:40.724 "raid_level": "raid0", 00:24:40.724 "superblock": true, 00:24:40.724 "num_base_bdevs": 4, 00:24:40.724 "num_base_bdevs_discovered": 3, 00:24:40.724 "num_base_bdevs_operational": 4, 00:24:40.724 "base_bdevs_list": [ 00:24:40.724 { 00:24:40.724 "name": "BaseBdev1", 00:24:40.724 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:40.724 "is_configured": true, 00:24:40.724 "data_offset": 2048, 00:24:40.724 "data_size": 63488 00:24:40.724 }, 00:24:40.724 { 00:24:40.724 "name": "BaseBdev2", 00:24:40.724 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:40.724 "is_configured": true, 00:24:40.725 "data_offset": 2048, 00:24:40.725 "data_size": 63488 00:24:40.725 }, 00:24:40.725 { 
00:24:40.725 "name": "BaseBdev3", 00:24:40.725 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:40.725 "is_configured": true, 00:24:40.725 "data_offset": 2048, 00:24:40.725 "data_size": 63488 00:24:40.725 }, 00:24:40.725 { 00:24:40.725 "name": "BaseBdev4", 00:24:40.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.725 "is_configured": false, 00:24:40.725 "data_offset": 0, 00:24:40.725 "data_size": 0 00:24:40.725 } 00:24:40.725 ] 00:24:40.725 }' 00:24:40.725 08:51:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.725 08:51:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.290 08:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:41.548 [2024-07-12 08:51:16.668569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:41.548 [2024-07-12 08:51:16.668889] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:24:41.548 [2024-07-12 08:51:16.668904] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:41.548 BaseBdev4 00:24:41.548 [2024-07-12 08:51:16.669063] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:41.548 [2024-07-12 08:51:16.669427] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:41.548 [2024-07-12 08:51:16.669454] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:41.548 [2024-07-12 08:51:16.669594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:41.548 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:41.806 08:51:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:42.065 [ 00:24:42.065 { 00:24:42.065 "name": "BaseBdev4", 00:24:42.065 "aliases": [ 00:24:42.065 "ec581ad6-6158-4e9d-b5ef-ffa704975943" 00:24:42.065 ], 00:24:42.065 "product_name": "Malloc disk", 00:24:42.065 "block_size": 512, 00:24:42.065 "num_blocks": 65536, 00:24:42.065 "uuid": "ec581ad6-6158-4e9d-b5ef-ffa704975943", 00:24:42.065 "assigned_rate_limits": { 00:24:42.065 "rw_ios_per_sec": 0, 00:24:42.065 "rw_mbytes_per_sec": 0, 00:24:42.065 "r_mbytes_per_sec": 0, 00:24:42.065 "w_mbytes_per_sec": 0 00:24:42.065 }, 00:24:42.065 "claimed": true, 00:24:42.065 "claim_type": "exclusive_write", 00:24:42.065 "zoned": false, 00:24:42.065 "supported_io_types": { 
00:24:42.065 "read": true, 00:24:42.065 "write": true, 00:24:42.065 "unmap": true, 00:24:42.065 "flush": true, 00:24:42.065 "reset": true, 00:24:42.065 "nvme_admin": false, 00:24:42.065 "nvme_io": false, 00:24:42.065 "nvme_io_md": false, 00:24:42.065 "write_zeroes": true, 00:24:42.065 "zcopy": true, 00:24:42.065 "get_zone_info": false, 00:24:42.065 "zone_management": false, 00:24:42.065 "zone_append": false, 00:24:42.065 "compare": false, 00:24:42.065 "compare_and_write": false, 00:24:42.065 "abort": true, 00:24:42.065 "seek_hole": false, 00:24:42.065 "seek_data": false, 00:24:42.065 "copy": true, 00:24:42.065 "nvme_iov_md": false 00:24:42.065 }, 00:24:42.065 "memory_domains": [ 00:24:42.065 { 00:24:42.065 "dma_device_id": "system", 00:24:42.065 "dma_device_type": 1 00:24:42.065 }, 00:24:42.065 { 00:24:42.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.065 "dma_device_type": 2 00:24:42.065 } 00:24:42.065 ], 00:24:42.065 "driver_specific": {} 00:24:42.065 } 00:24:42.065 ] 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.065 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.323 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.323 "name": "Existed_Raid", 00:24:42.323 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:42.323 "strip_size_kb": 64, 00:24:42.323 "state": "online", 00:24:42.323 "raid_level": "raid0", 00:24:42.323 "superblock": true, 00:24:42.323 "num_base_bdevs": 4, 00:24:42.323 "num_base_bdevs_discovered": 4, 00:24:42.323 "num_base_bdevs_operational": 4, 00:24:42.323 "base_bdevs_list": [ 00:24:42.323 { 00:24:42.323 "name": "BaseBdev1", 00:24:42.323 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:42.323 "is_configured": true, 00:24:42.323 "data_offset": 2048, 00:24:42.323 "data_size": 63488 00:24:42.323 }, 00:24:42.323 
{ 00:24:42.323 "name": "BaseBdev2", 00:24:42.323 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:42.323 "is_configured": true, 00:24:42.323 "data_offset": 2048, 00:24:42.323 "data_size": 63488 00:24:42.323 }, 00:24:42.323 { 00:24:42.323 "name": "BaseBdev3", 00:24:42.323 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:42.323 "is_configured": true, 00:24:42.323 "data_offset": 2048, 00:24:42.323 "data_size": 63488 00:24:42.323 }, 00:24:42.323 { 00:24:42.323 "name": "BaseBdev4", 00:24:42.323 "uuid": "ec581ad6-6158-4e9d-b5ef-ffa704975943", 00:24:42.323 "is_configured": true, 00:24:42.323 "data_offset": 2048, 00:24:42.323 "data_size": 63488 00:24:42.323 } 00:24:42.323 ] 00:24:42.323 }' 00:24:42.323 08:51:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.323 08:51:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:42.890 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:43.149 [2024-07-12 08:51:18.281406] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.149 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:43.149 "name": "Existed_Raid", 00:24:43.149 "aliases": [ 00:24:43.149 "7d4b23e0-3767-48f8-ab16-8abd4b672c13" 00:24:43.149 ], 00:24:43.149 "product_name": "Raid Volume", 00:24:43.149 "block_size": 512, 00:24:43.149 "num_blocks": 253952, 00:24:43.149 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:43.149 "assigned_rate_limits": { 00:24:43.149 "rw_ios_per_sec": 0, 00:24:43.149 "rw_mbytes_per_sec": 0, 00:24:43.149 "r_mbytes_per_sec": 0, 00:24:43.149 "w_mbytes_per_sec": 0 00:24:43.149 }, 00:24:43.149 "claimed": false, 00:24:43.149 "zoned": false, 00:24:43.149 "supported_io_types": { 00:24:43.149 "read": true, 00:24:43.149 "write": true, 00:24:43.149 "unmap": true, 00:24:43.149 "flush": true, 00:24:43.149 "reset": true, 00:24:43.149 "nvme_admin": false, 00:24:43.149 "nvme_io": false, 00:24:43.149 "nvme_io_md": false, 00:24:43.149 "write_zeroes": true, 00:24:43.149 "zcopy": false, 00:24:43.149 "get_zone_info": false, 00:24:43.149 "zone_management": false, 00:24:43.149 "zone_append": false, 00:24:43.149 "compare": false, 00:24:43.149 "compare_and_write": false, 00:24:43.149 "abort": false, 00:24:43.149 "seek_hole": false, 00:24:43.149 "seek_data": false, 00:24:43.149 "copy": false, 00:24:43.149 "nvme_iov_md": false 00:24:43.149 }, 00:24:43.149 "memory_domains": [ 00:24:43.149 { 00:24:43.149 "dma_device_id": "system", 00:24:43.149 "dma_device_type": 1 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.149 "dma_device_type": 2 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "system", 00:24:43.149 "dma_device_type": 1 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.149 "dma_device_type": 2 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "system", 00:24:43.149 "dma_device_type": 1 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.149 "dma_device_type": 2 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "system", 00:24:43.149 "dma_device_type": 1 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.149 "dma_device_type": 2 00:24:43.149 } 00:24:43.149 ], 00:24:43.149 "driver_specific": { 00:24:43.149 "raid": { 00:24:43.149 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:43.149 "strip_size_kb": 64, 00:24:43.149 "state": "online", 00:24:43.149 "raid_level": "raid0", 00:24:43.149 "superblock": true, 00:24:43.149 "num_base_bdevs": 4, 00:24:43.149 "num_base_bdevs_discovered": 4, 00:24:43.149 "num_base_bdevs_operational": 4, 00:24:43.149 "base_bdevs_list": [ 00:24:43.149 { 00:24:43.149 "name": "BaseBdev1", 00:24:43.149 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 2048, 00:24:43.149 "data_size": 63488 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "name": "BaseBdev2", 00:24:43.149 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 2048, 00:24:43.149 "data_size": 63488 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "name": "BaseBdev3", 00:24:43.149 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 2048, 00:24:43.149 "data_size": 63488 00:24:43.149 }, 00:24:43.149 { 00:24:43.149 "name": "BaseBdev4", 00:24:43.149 "uuid": "ec581ad6-6158-4e9d-b5ef-ffa704975943", 00:24:43.149 "is_configured": true, 00:24:43.149 "data_offset": 2048, 00:24:43.149 "data_size": 63488 00:24:43.149 } 00:24:43.149 ] 00:24:43.149 } 00:24:43.149 } 00:24:43.149 }' 00:24:43.149 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:43.407 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:43.407 BaseBdev2 00:24:43.407 BaseBdev3 00:24:43.407 BaseBdev4' 00:24:43.407 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:43.407 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:43.407 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:43.665 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:43.665 "name": "BaseBdev1", 00:24:43.665 "aliases": [ 00:24:43.665 "30dd3593-98de-47c5-9f35-336bee32a872" 00:24:43.665 ], 00:24:43.665 "product_name": "Malloc disk", 00:24:43.665 "block_size": 512, 00:24:43.665 "num_blocks": 65536, 00:24:43.665 "uuid": "30dd3593-98de-47c5-9f35-336bee32a872", 00:24:43.665 "assigned_rate_limits": { 00:24:43.665 "rw_ios_per_sec": 0, 00:24:43.665 "rw_mbytes_per_sec": 0, 00:24:43.665 "r_mbytes_per_sec": 0, 00:24:43.665 "w_mbytes_per_sec": 0 00:24:43.665 }, 00:24:43.665 
"claimed": true, 00:24:43.665 "claim_type": "exclusive_write", 00:24:43.665 "zoned": false, 00:24:43.665 "supported_io_types": { 00:24:43.665 "read": true, 00:24:43.665 "write": true, 00:24:43.665 "unmap": true, 00:24:43.665 "flush": true, 00:24:43.665 "reset": true, 00:24:43.665 "nvme_admin": false, 00:24:43.665 "nvme_io": false, 00:24:43.665 "nvme_io_md": false, 00:24:43.665 "write_zeroes": true, 00:24:43.665 "zcopy": true, 00:24:43.665 "get_zone_info": false, 00:24:43.665 "zone_management": false, 00:24:43.666 "zone_append": false, 00:24:43.666 "compare": false, 00:24:43.666 "compare_and_write": false, 00:24:43.666 "abort": true, 00:24:43.666 "seek_hole": false, 00:24:43.666 "seek_data": false, 00:24:43.666 "copy": true, 00:24:43.666 "nvme_iov_md": false 00:24:43.666 }, 00:24:43.666 "memory_domains": [ 00:24:43.666 { 00:24:43.666 "dma_device_id": "system", 00:24:43.666 "dma_device_type": 1 00:24:43.666 }, 00:24:43.666 { 00:24:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.666 "dma_device_type": 2 00:24:43.666 } 00:24:43.666 ], 00:24:43.666 "driver_specific": {} 00:24:43.666 }' 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:43.666 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.924 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:43.924 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:43.924 08:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.924 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:43.924 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:43.924 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:43.924 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:43.924 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:44.182 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.182 "name": "BaseBdev2", 00:24:44.182 "aliases": [ 00:24:44.182 "e397b2bc-5f5e-4c1e-96b0-71585b0b8500" 00:24:44.182 ], 00:24:44.182 "product_name": "Malloc disk", 00:24:44.182 "block_size": 512, 00:24:44.182 "num_blocks": 65536, 00:24:44.182 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:44.182 "assigned_rate_limits": { 00:24:44.182 "rw_ios_per_sec": 0, 00:24:44.182 "rw_mbytes_per_sec": 0, 00:24:44.182 "r_mbytes_per_sec": 0, 00:24:44.182 "w_mbytes_per_sec": 0 00:24:44.182 }, 00:24:44.182 "claimed": true, 00:24:44.182 "claim_type": "exclusive_write", 00:24:44.182 "zoned": false, 00:24:44.182 "supported_io_types": { 00:24:44.182 "read": 
true, 00:24:44.182 "write": true, 00:24:44.182 "unmap": true, 00:24:44.182 "flush": true, 00:24:44.182 "reset": true, 00:24:44.182 "nvme_admin": false, 00:24:44.182 "nvme_io": false, 00:24:44.182 "nvme_io_md": false, 00:24:44.182 "write_zeroes": true, 00:24:44.182 "zcopy": true, 00:24:44.182 "get_zone_info": false, 00:24:44.182 "zone_management": false, 00:24:44.182 "zone_append": false, 00:24:44.182 "compare": false, 00:24:44.182 "compare_and_write": false, 00:24:44.182 "abort": true, 00:24:44.182 "seek_hole": false, 00:24:44.182 "seek_data": false, 00:24:44.182 "copy": true, 00:24:44.182 "nvme_iov_md": false 00:24:44.182 }, 00:24:44.182 "memory_domains": [ 00:24:44.182 { 00:24:44.182 "dma_device_id": "system", 00:24:44.182 "dma_device_type": 1 00:24:44.182 }, 00:24:44.182 { 00:24:44.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.182 "dma_device_type": 2 00:24:44.182 } 00:24:44.182 ], 00:24:44.182 "driver_specific": {} 00:24:44.182 }' 00:24:44.182 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.441 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:44.699 08:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:44.957 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.957 "name": "BaseBdev3", 00:24:44.957 "aliases": [ 00:24:44.957 "d34abca0-c306-4fcb-97a1-a0987792f947" 00:24:44.957 ], 00:24:44.957 "product_name": "Malloc disk", 00:24:44.957 "block_size": 512, 00:24:44.957 "num_blocks": 65536, 00:24:44.957 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:44.957 "assigned_rate_limits": { 00:24:44.957 "rw_ios_per_sec": 0, 00:24:44.957 "rw_mbytes_per_sec": 0, 00:24:44.957 "r_mbytes_per_sec": 0, 00:24:44.957 "w_mbytes_per_sec": 0 00:24:44.957 }, 00:24:44.957 "claimed": true, 00:24:44.957 "claim_type": "exclusive_write", 00:24:44.957 "zoned": false, 00:24:44.957 "supported_io_types": { 00:24:44.957 "read": true, 00:24:44.957 "write": true, 00:24:44.957 "unmap": true, 00:24:44.957 "flush": true, 00:24:44.957 "reset": true, 00:24:44.957 "nvme_admin": false, 
00:24:44.957 "nvme_io": false, 00:24:44.957 "nvme_io_md": false, 00:24:44.957 "write_zeroes": true, 00:24:44.957 "zcopy": true, 00:24:44.957 "get_zone_info": false, 00:24:44.957 "zone_management": false, 00:24:44.957 "zone_append": false, 00:24:44.957 "compare": false, 00:24:44.957 "compare_and_write": false, 00:24:44.957 "abort": true, 00:24:44.957 "seek_hole": false, 00:24:44.957 "seek_data": false, 00:24:44.957 "copy": true, 00:24:44.957 "nvme_iov_md": false 00:24:44.957 }, 00:24:44.957 "memory_domains": [ 00:24:44.957 { 00:24:44.957 "dma_device_id": "system", 00:24:44.957 "dma_device_type": 1 00:24:44.957 }, 00:24:44.957 { 00:24:44.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.957 "dma_device_type": 2 00:24:44.957 } 00:24:44.957 ], 00:24:44.957 "driver_specific": {} 00:24:44.957 }' 00:24:44.957 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.957 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.215 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.216 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.216 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.474 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.474 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.474 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.474 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:45.474 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:45.732 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:45.732 "name": "BaseBdev4", 00:24:45.732 "aliases": [ 00:24:45.732 "ec581ad6-6158-4e9d-b5ef-ffa704975943" 00:24:45.732 ], 00:24:45.733 "product_name": "Malloc disk", 00:24:45.733 "block_size": 512, 00:24:45.733 "num_blocks": 65536, 00:24:45.733 "uuid": "ec581ad6-6158-4e9d-b5ef-ffa704975943", 00:24:45.733 "assigned_rate_limits": { 00:24:45.733 "rw_ios_per_sec": 0, 00:24:45.733 "rw_mbytes_per_sec": 0, 00:24:45.733 "r_mbytes_per_sec": 0, 00:24:45.733 "w_mbytes_per_sec": 0 00:24:45.733 }, 00:24:45.733 "claimed": true, 00:24:45.733 "claim_type": "exclusive_write", 00:24:45.733 "zoned": false, 00:24:45.733 "supported_io_types": { 00:24:45.733 "read": true, 00:24:45.733 "write": true, 00:24:45.733 "unmap": true, 00:24:45.733 "flush": true, 00:24:45.733 "reset": true, 00:24:45.733 "nvme_admin": false, 00:24:45.733 "nvme_io": false, 00:24:45.733 "nvme_io_md": false, 00:24:45.733 "write_zeroes": true, 00:24:45.733 "zcopy": true, 00:24:45.733 
"get_zone_info": false, 00:24:45.733 "zone_management": false, 00:24:45.733 "zone_append": false, 00:24:45.733 "compare": false, 00:24:45.733 "compare_and_write": false, 00:24:45.733 "abort": true, 00:24:45.733 "seek_hole": false, 00:24:45.733 "seek_data": false, 00:24:45.733 "copy": true, 00:24:45.733 "nvme_iov_md": false 00:24:45.733 }, 00:24:45.733 "memory_domains": [ 00:24:45.733 { 00:24:45.733 "dma_device_id": "system", 00:24:45.733 "dma_device_type": 1 00:24:45.733 }, 00:24:45.733 { 00:24:45.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.733 "dma_device_type": 2 00:24:45.733 } 00:24:45.733 ], 00:24:45.733 "driver_specific": {} 00:24:45.733 }' 00:24:45.733 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.733 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.733 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:45.733 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.733 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.991 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.991 08:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.991 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.991 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.991 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.991 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.249 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.249 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:46.249 [2024-07-12 08:51:21.429866] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:46.249 [2024-07-12 08:51:21.429904] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.249 [2024-07-12 08:51:21.429983] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:46.507 08:51:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.507 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:46.764 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.764 "name": "Existed_Raid", 00:24:46.764 "uuid": "7d4b23e0-3767-48f8-ab16-8abd4b672c13", 00:24:46.764 "strip_size_kb": 64, 00:24:46.764 "state": "offline", 00:24:46.764 "raid_level": "raid0", 00:24:46.764 "superblock": true, 00:24:46.764 "num_base_bdevs": 4, 00:24:46.764 "num_base_bdevs_discovered": 3, 00:24:46.764 "num_base_bdevs_operational": 3, 00:24:46.764 "base_bdevs_list": [ 00:24:46.764 { 00:24:46.764 "name": null, 00:24:46.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.764 "is_configured": false, 00:24:46.764 "data_offset": 2048, 00:24:46.764 "data_size": 63488 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "name": "BaseBdev2", 00:24:46.764 "uuid": "e397b2bc-5f5e-4c1e-96b0-71585b0b8500", 00:24:46.764 "is_configured": true, 00:24:46.764 "data_offset": 2048, 00:24:46.764 "data_size": 63488 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "name": "BaseBdev3", 00:24:46.764 "uuid": "d34abca0-c306-4fcb-97a1-a0987792f947", 00:24:46.764 "is_configured": true, 00:24:46.764 "data_offset": 2048, 00:24:46.764 "data_size": 63488 00:24:46.764 }, 00:24:46.764 { 00:24:46.764 "name": "BaseBdev4", 00:24:46.764 "uuid": "ec581ad6-6158-4e9d-b5ef-ffa704975943", 00:24:46.764 "is_configured": true, 00:24:46.764 "data_offset": 2048, 00:24:46.764 "data_size": 63488 00:24:46.764 } 00:24:46.764 ] 00:24:46.764 }' 00:24:46.764 08:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.764 08:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.330 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:47.330 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:47.330 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.330 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:47.589 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:47.589 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:47.589 08:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:24:47.847 [2024-07-12 08:51:22.976491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:48.104 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:48.104 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:48.104 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.104 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:48.361 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:48.361 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:48.361 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:48.620 [2024-07-12 08:51:23.563502] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:48.620 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:48.620 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:48.620 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.620 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:48.878 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:48.878 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:48.878 08:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:49.139 [2024-07-12 08:51:24.134593] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:49.139 [2024-07-12 08:51:24.134686] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:49.139 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:49.139 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:49.139 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.139 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:49.397 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:49.655 BaseBdev2 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:49.655 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:49.912 08:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:49.912 [ 00:24:49.912 { 00:24:49.912 "name": "BaseBdev2", 00:24:49.912 "aliases": [ 00:24:49.912 "9e38be96-5fbd-4999-a338-589eaf199bef" 00:24:49.912 ], 00:24:49.912 "product_name": "Malloc disk", 00:24:49.912 "block_size": 512, 00:24:49.912 "num_blocks": 65536, 00:24:49.912 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:49.912 "assigned_rate_limits": { 00:24:49.912 "rw_ios_per_sec": 0, 00:24:49.912 "rw_mbytes_per_sec": 0, 00:24:49.912 "r_mbytes_per_sec": 0, 00:24:49.912 "w_mbytes_per_sec": 0 00:24:49.912 }, 00:24:49.912 "claimed": false, 00:24:49.912 "zoned": false, 00:24:49.912 "supported_io_types": { 00:24:49.912 "read": true, 00:24:49.912 "write": true, 00:24:49.912 "unmap": true, 00:24:49.912 "flush": true, 00:24:49.912 "reset": true, 00:24:49.912 "nvme_admin": false, 00:24:49.912 "nvme_io": false, 00:24:49.912 "nvme_io_md": false, 00:24:49.912 "write_zeroes": true, 00:24:49.912 "zcopy": true, 00:24:49.912 "get_zone_info": false, 00:24:49.912 "zone_management": false, 00:24:49.912 "zone_append": false, 00:24:49.912 "compare": false, 00:24:49.912 "compare_and_write": false, 00:24:49.912 "abort": true, 00:24:49.912 "seek_hole": false, 00:24:49.912 "seek_data": false, 00:24:49.912 "copy": true, 00:24:49.912 "nvme_iov_md": false 00:24:49.912 }, 00:24:49.912 "memory_domains": [ 00:24:49.912 { 00:24:49.912 "dma_device_id": "system", 00:24:49.912 "dma_device_type": 1 00:24:49.912 }, 00:24:49.912 { 00:24:49.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.912 "dma_device_type": 2 00:24:49.912 } 00:24:49.912 ], 00:24:49.912 "driver_specific": {} 00:24:49.912 } 00:24:49.912 ] 00:24:49.912 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:49.912 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:49.912 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:49.912 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:50.170 BaseBdev3 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:50.428 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:50.686 [ 00:24:50.686 { 00:24:50.686 "name": "BaseBdev3", 00:24:50.686 "aliases": [ 00:24:50.686 "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb" 00:24:50.686 ], 00:24:50.686 "product_name": "Malloc disk", 00:24:50.686 "block_size": 512, 00:24:50.686 "num_blocks": 65536, 00:24:50.686 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:50.686 "assigned_rate_limits": { 00:24:50.686 "rw_ios_per_sec": 0, 00:24:50.686 "rw_mbytes_per_sec": 0, 00:24:50.686 "r_mbytes_per_sec": 0, 00:24:50.686 "w_mbytes_per_sec": 0 00:24:50.686 }, 00:24:50.686 "claimed": false, 00:24:50.686 "zoned": false, 00:24:50.686 "supported_io_types": { 00:24:50.686 "read": true, 00:24:50.686 "write": true, 00:24:50.686 "unmap": true, 00:24:50.686 "flush": true, 00:24:50.686 "reset": true, 00:24:50.686 "nvme_admin": false, 00:24:50.686 "nvme_io": false, 00:24:50.686 "nvme_io_md": false, 00:24:50.687 "write_zeroes": true, 00:24:50.687 "zcopy": true, 00:24:50.687 "get_zone_info": false, 00:24:50.687 "zone_management": false, 00:24:50.687 "zone_append": false, 00:24:50.687 "compare": false, 00:24:50.687 "compare_and_write": false, 00:24:50.687 "abort": true, 00:24:50.687 "seek_hole": false, 00:24:50.687 "seek_data": false, 00:24:50.687 "copy": true, 00:24:50.687 "nvme_iov_md": false 00:24:50.687 }, 00:24:50.687 "memory_domains": [ 00:24:50.687 { 00:24:50.687 "dma_device_id": "system", 00:24:50.687 "dma_device_type": 1 00:24:50.687 }, 00:24:50.687 { 00:24:50.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.687 "dma_device_type": 2 00:24:50.687 } 00:24:50.687 ], 00:24:50.687 "driver_specific": {} 00:24:50.687 } 00:24:50.687 ] 00:24:50.687 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:50.687 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:50.687 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:50.687 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:50.961 BaseBdev4 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:50.962 08:51:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:50.962 08:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:51.235 08:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:51.235 [ 00:24:51.235 { 00:24:51.235 "name": "BaseBdev4", 00:24:51.235 "aliases": [ 00:24:51.235 "c359fcd8-8116-4f46-9352-bb004be2c059" 00:24:51.235 ], 00:24:51.235 "product_name": "Malloc disk", 00:24:51.235 "block_size": 512, 00:24:51.235 "num_blocks": 65536, 00:24:51.235 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:51.235 "assigned_rate_limits": { 00:24:51.235 "rw_ios_per_sec": 0, 00:24:51.235 "rw_mbytes_per_sec": 0, 00:24:51.235 "r_mbytes_per_sec": 0, 00:24:51.235 "w_mbytes_per_sec": 0 00:24:51.235 }, 00:24:51.235 "claimed": false, 00:24:51.235 "zoned": false, 00:24:51.235 "supported_io_types": { 00:24:51.235 "read": true, 00:24:51.235 "write": true, 00:24:51.235 "unmap": true, 00:24:51.235 "flush": true, 00:24:51.235 "reset": true, 00:24:51.235 "nvme_admin": false, 00:24:51.235 "nvme_io": false, 00:24:51.235 "nvme_io_md": false, 00:24:51.235 "write_zeroes": true, 00:24:51.235 "zcopy": true, 00:24:51.235 "get_zone_info": false, 00:24:51.235 "zone_management": false, 00:24:51.235 "zone_append": false, 00:24:51.235 "compare": false, 00:24:51.235 "compare_and_write": false, 00:24:51.235 "abort": true, 00:24:51.235 "seek_hole": false, 00:24:51.235 "seek_data": false, 00:24:51.235 "copy": true, 00:24:51.235 "nvme_iov_md": false 00:24:51.235 }, 00:24:51.235 "memory_domains": [ 00:24:51.235 { 00:24:51.235 "dma_device_id": "system", 00:24:51.235 "dma_device_type": 1 00:24:51.235 }, 00:24:51.235 { 00:24:51.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.235 "dma_device_type": 2 00:24:51.235 } 00:24:51.235 ], 00:24:51.235 "driver_specific": {} 00:24:51.235 } 00:24:51.235 ] 00:24:51.235 08:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:51.235 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:51.235 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:51.235 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:51.497 [2024-07-12 08:51:26.636692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:51.497 [2024-07-12 08:51:26.636809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:51.497 [2024-07-12 08:51:26.636853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:51.497 [2024-07-12 08:51:26.638831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:51.497 [2024-07-12 08:51:26.638913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.497 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.755 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.755 "name": "Existed_Raid", 00:24:51.755 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:51.755 "strip_size_kb": 64, 00:24:51.755 "state": "configuring", 00:24:51.755 "raid_level": "raid0", 00:24:51.755 "superblock": true, 00:24:51.755 "num_base_bdevs": 4, 00:24:51.755 "num_base_bdevs_discovered": 3, 00:24:51.755 "num_base_bdevs_operational": 4, 00:24:51.755 "base_bdevs_list": [ 00:24:51.755 { 00:24:51.755 "name": "BaseBdev1", 00:24:51.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.755 "is_configured": false, 00:24:51.755 "data_offset": 0, 00:24:51.755 "data_size": 0 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "name": "BaseBdev2", 00:24:51.755 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:51.755 "is_configured": true, 00:24:51.755 "data_offset": 2048, 00:24:51.755 "data_size": 63488 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "name": "BaseBdev3", 00:24:51.755 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:51.755 "is_configured": true, 00:24:51.755 "data_offset": 2048, 00:24:51.755 "data_size": 63488 00:24:51.755 }, 00:24:51.755 { 00:24:51.755 "name": "BaseBdev4", 00:24:51.755 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:51.755 "is_configured": true, 00:24:51.755 "data_offset": 2048, 00:24:51.755 "data_size": 63488 00:24:51.755 } 00:24:51.755 ] 00:24:51.755 }' 00:24:51.755 08:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.755 08:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:52.689 [2024-07-12 08:51:27.848983] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:52.689 08:51:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.689 08:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.947 08:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.947 "name": "Existed_Raid", 00:24:52.947 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:52.947 "strip_size_kb": 64, 00:24:52.947 "state": "configuring", 00:24:52.947 "raid_level": "raid0", 00:24:52.947 "superblock": true, 00:24:52.947 "num_base_bdevs": 4, 00:24:52.947 "num_base_bdevs_discovered": 2, 00:24:52.947 "num_base_bdevs_operational": 4, 00:24:52.947 "base_bdevs_list": [ 00:24:52.947 { 00:24:52.947 "name": "BaseBdev1", 00:24:52.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.947 "is_configured": false, 00:24:52.947 "data_offset": 0, 00:24:52.947 "data_size": 0 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": null, 00:24:52.947 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:52.947 "is_configured": false, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": "BaseBdev3", 00:24:52.947 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": "BaseBdev4", 00:24:52.947 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 } 00:24:52.947 ] 00:24:52.947 }' 00:24:52.947 08:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.947 08:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 08:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.883 08:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:53.883 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:53.883 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:54.140 [2024-07-12 08:51:29.283872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.140 BaseBdev1 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:54.140 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:54.398 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:54.655 [ 00:24:54.655 { 00:24:54.655 "name": "BaseBdev1", 00:24:54.655 "aliases": [ 00:24:54.655 "be961193-37a1-4ca5-81a1-386f20b7f3f5" 00:24:54.655 ], 00:24:54.655 "product_name": "Malloc disk", 00:24:54.655 "block_size": 512, 00:24:54.655 "num_blocks": 65536, 00:24:54.655 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:24:54.655 "assigned_rate_limits": { 00:24:54.655 "rw_ios_per_sec": 0, 00:24:54.655 "rw_mbytes_per_sec": 0, 00:24:54.655 "r_mbytes_per_sec": 0, 00:24:54.655 "w_mbytes_per_sec": 0 00:24:54.655 }, 00:24:54.655 "claimed": true, 00:24:54.655 "claim_type": "exclusive_write", 00:24:54.655 "zoned": false, 00:24:54.655 "supported_io_types": { 00:24:54.655 "read": true, 00:24:54.655 "write": true, 00:24:54.655 "unmap": true, 00:24:54.655 "flush": true, 00:24:54.655 "reset": true, 00:24:54.655 "nvme_admin": false, 00:24:54.655 "nvme_io": false, 00:24:54.655 "nvme_io_md": false, 00:24:54.655 "write_zeroes": true, 00:24:54.655 "zcopy": true, 00:24:54.655 "get_zone_info": false, 00:24:54.655 "zone_management": false, 00:24:54.655 "zone_append": false, 00:24:54.655 "compare": false, 00:24:54.655 "compare_and_write": false, 00:24:54.655 "abort": true, 00:24:54.655 "seek_hole": false, 00:24:54.655 "seek_data": false, 00:24:54.655 "copy": true, 00:24:54.655 "nvme_iov_md": false 00:24:54.655 }, 00:24:54.655 "memory_domains": [ 00:24:54.655 { 00:24:54.655 "dma_device_id": "system", 00:24:54.655 "dma_device_type": 1 00:24:54.655 }, 00:24:54.655 { 00:24:54.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.655 "dma_device_type": 2 00:24:54.655 } 00:24:54.655 ], 00:24:54.655 "driver_specific": {} 00:24:54.655 } 00:24:54.655 ] 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.655 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.912 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.913 "name": "Existed_Raid", 00:24:54.913 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:54.913 "strip_size_kb": 64, 00:24:54.913 "state": "configuring", 00:24:54.913 "raid_level": "raid0", 00:24:54.913 "superblock": true, 00:24:54.913 "num_base_bdevs": 4, 00:24:54.913 "num_base_bdevs_discovered": 3, 00:24:54.913 "num_base_bdevs_operational": 4, 00:24:54.913 "base_bdevs_list": [ 00:24:54.913 { 00:24:54.913 "name": "BaseBdev1", 00:24:54.913 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:24:54.913 "is_configured": true, 00:24:54.913 "data_offset": 2048, 00:24:54.913 "data_size": 63488 00:24:54.913 }, 00:24:54.913 { 00:24:54.913 "name": null, 00:24:54.913 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:54.913 "is_configured": false, 00:24:54.913 "data_offset": 2048, 00:24:54.913 "data_size": 63488 00:24:54.913 }, 00:24:54.913 { 00:24:54.913 "name": "BaseBdev3", 00:24:54.913 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:54.913 "is_configured": true, 00:24:54.913 "data_offset": 2048, 00:24:54.913 "data_size": 63488 00:24:54.913 }, 00:24:54.913 { 00:24:54.913 "name": "BaseBdev4", 00:24:54.913 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:54.913 "is_configured": true, 00:24:54.913 "data_offset": 2048, 00:24:54.913 "data_size": 63488 00:24:54.913 } 00:24:54.913 ] 00:24:54.913 }' 00:24:54.913 08:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.913 08:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:55.479 08:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.479 08:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:55.738 08:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:55.738 08:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:55.996 [2024-07-12 08:51:31.016307] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
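The verify_raid_bdev_state helper traced above reduces to a single RPC plus a jq filter over its output. A minimal standalone sketch of the same check, assuming an SPDK app is already serving RPCs on /var/tmp/spdk-raid.sock and using only the rpc.py calls visible in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Dump every raid bdev registered with the app and keep the one under test.
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    # With a base bdev removed, the superblock raid0 set must stay in "configuring".
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == raid0 ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]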
00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.996 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.255 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.255 "name": "Existed_Raid", 00:24:56.255 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:56.255 "strip_size_kb": 64, 00:24:56.255 "state": "configuring", 00:24:56.255 "raid_level": "raid0", 00:24:56.255 "superblock": true, 00:24:56.255 "num_base_bdevs": 4, 00:24:56.255 "num_base_bdevs_discovered": 2, 00:24:56.255 "num_base_bdevs_operational": 4, 00:24:56.255 "base_bdevs_list": [ 00:24:56.255 { 00:24:56.255 "name": "BaseBdev1", 00:24:56.255 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:24:56.255 "is_configured": true, 00:24:56.255 "data_offset": 2048, 00:24:56.255 "data_size": 63488 00:24:56.255 }, 00:24:56.255 { 00:24:56.255 "name": null, 00:24:56.255 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:56.255 "is_configured": false, 00:24:56.255 "data_offset": 2048, 00:24:56.255 "data_size": 63488 00:24:56.255 }, 00:24:56.255 { 00:24:56.255 "name": null, 00:24:56.255 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:56.255 "is_configured": false, 00:24:56.255 "data_offset": 2048, 00:24:56.255 "data_size": 63488 00:24:56.255 }, 00:24:56.255 { 00:24:56.255 "name": "BaseBdev4", 00:24:56.255 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:56.255 "is_configured": true, 00:24:56.255 "data_offset": 2048, 00:24:56.255 "data_size": 63488 00:24:56.255 } 00:24:56.255 ] 00:24:56.255 }' 00:24:56.255 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.255 08:51:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:56.821 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.821 08:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:57.079 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:57.079 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:57.338 [2024-07-12 08:51:32.384819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.338 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.595 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.595 "name": "Existed_Raid", 00:24:57.595 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:57.595 "strip_size_kb": 64, 00:24:57.595 "state": "configuring", 00:24:57.595 "raid_level": "raid0", 00:24:57.595 "superblock": true, 00:24:57.595 "num_base_bdevs": 4, 00:24:57.595 "num_base_bdevs_discovered": 3, 00:24:57.595 "num_base_bdevs_operational": 4, 00:24:57.595 "base_bdevs_list": [ 00:24:57.595 { 00:24:57.595 "name": "BaseBdev1", 00:24:57.595 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:24:57.595 "is_configured": true, 00:24:57.595 "data_offset": 2048, 00:24:57.595 "data_size": 63488 00:24:57.595 }, 00:24:57.595 { 00:24:57.595 "name": null, 00:24:57.595 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:57.595 "is_configured": false, 00:24:57.595 "data_offset": 2048, 00:24:57.595 "data_size": 63488 00:24:57.595 }, 00:24:57.595 { 00:24:57.595 "name": "BaseBdev3", 00:24:57.595 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:57.595 "is_configured": true, 00:24:57.595 "data_offset": 2048, 00:24:57.595 "data_size": 63488 00:24:57.595 }, 00:24:57.595 { 00:24:57.595 "name": "BaseBdev4", 00:24:57.595 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:57.595 "is_configured": true, 00:24:57.595 "data_offset": 2048, 00:24:57.595 "data_size": 63488 00:24:57.595 } 00:24:57.595 ] 00:24:57.595 }' 00:24:57.595 08:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.595 08:51:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.160 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.160 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:58.418 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:58.418 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:58.675 [2024-07-12 08:51:33.821173] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.934 08:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.191 08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.191 "name": "Existed_Raid", 00:24:59.191 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:24:59.191 "strip_size_kb": 64, 00:24:59.191 "state": "configuring", 00:24:59.191 "raid_level": "raid0", 00:24:59.191 "superblock": true, 00:24:59.191 "num_base_bdevs": 4, 00:24:59.191 "num_base_bdevs_discovered": 2, 00:24:59.191 "num_base_bdevs_operational": 4, 00:24:59.191 "base_bdevs_list": [ 00:24:59.191 { 00:24:59.191 "name": null, 00:24:59.191 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:24:59.191 "is_configured": false, 00:24:59.191 "data_offset": 2048, 00:24:59.191 "data_size": 63488 00:24:59.191 }, 00:24:59.191 { 00:24:59.191 "name": null, 00:24:59.191 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:24:59.191 "is_configured": false, 00:24:59.191 "data_offset": 2048, 00:24:59.191 "data_size": 63488 00:24:59.191 }, 00:24:59.191 { 00:24:59.191 "name": "BaseBdev3", 00:24:59.191 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:24:59.191 "is_configured": true, 00:24:59.191 "data_offset": 2048, 00:24:59.191 "data_size": 63488 00:24:59.191 }, 00:24:59.191 { 00:24:59.191 "name": "BaseBdev4", 00:24:59.191 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:24:59.191 "is_configured": true, 00:24:59.191 "data_offset": 2048, 00:24:59.191 "data_size": 63488 00:24:59.191 } 00:24:59.191 ] 00:24:59.191 }' 00:24:59.191 
08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.191 08:51:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:59.755 08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.756 08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:00.012 08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:00.012 08:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:00.012 [2024-07-12 08:51:35.194328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:00.270 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.529 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.529 "name": "Existed_Raid", 00:25:00.529 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:25:00.529 "strip_size_kb": 64, 00:25:00.529 "state": "configuring", 00:25:00.529 "raid_level": "raid0", 00:25:00.529 "superblock": true, 00:25:00.529 "num_base_bdevs": 4, 00:25:00.529 "num_base_bdevs_discovered": 3, 00:25:00.529 "num_base_bdevs_operational": 4, 00:25:00.529 "base_bdevs_list": [ 00:25:00.529 { 00:25:00.529 "name": null, 00:25:00.529 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:25:00.529 "is_configured": false, 00:25:00.529 "data_offset": 2048, 00:25:00.529 "data_size": 63488 00:25:00.529 }, 00:25:00.529 { 00:25:00.529 "name": "BaseBdev2", 00:25:00.529 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:25:00.529 "is_configured": true, 00:25:00.529 "data_offset": 2048, 00:25:00.529 "data_size": 63488 00:25:00.529 }, 00:25:00.529 { 00:25:00.529 "name": "BaseBdev3", 00:25:00.529 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:25:00.529 
"is_configured": true, 00:25:00.529 "data_offset": 2048, 00:25:00.529 "data_size": 63488 00:25:00.529 }, 00:25:00.529 { 00:25:00.529 "name": "BaseBdev4", 00:25:00.529 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:25:00.529 "is_configured": true, 00:25:00.529 "data_offset": 2048, 00:25:00.529 "data_size": 63488 00:25:00.529 } 00:25:00.529 ] 00:25:00.529 }' 00:25:00.529 08:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.529 08:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.094 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.095 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:01.353 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:01.353 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:01.353 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.611 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u be961193-37a1-4ca5-81a1-386f20b7f3f5 00:25:01.869 [2024-07-12 08:51:36.909943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:01.869 [2024-07-12 08:51:36.910439] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:01.869 [2024-07-12 08:51:36.910561] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:01.869 NewBaseBdev 00:25:01.869 [2024-07-12 08:51:36.910715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:01.869 [2024-07-12 08:51:36.911054] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:01.869 [2024-07-12 08:51:36.911206] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:25:01.869 [2024-07-12 08:51:36.911428] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:01.869 08:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:02.126 08:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:25:02.383 [ 00:25:02.383 { 00:25:02.383 "name": "NewBaseBdev", 00:25:02.383 "aliases": [ 00:25:02.383 "be961193-37a1-4ca5-81a1-386f20b7f3f5" 00:25:02.383 ], 00:25:02.383 "product_name": "Malloc disk", 00:25:02.383 "block_size": 512, 00:25:02.383 "num_blocks": 65536, 00:25:02.383 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:25:02.383 "assigned_rate_limits": { 00:25:02.383 "rw_ios_per_sec": 0, 00:25:02.383 "rw_mbytes_per_sec": 0, 00:25:02.383 "r_mbytes_per_sec": 0, 00:25:02.383 "w_mbytes_per_sec": 0 00:25:02.383 }, 00:25:02.383 "claimed": true, 00:25:02.383 "claim_type": "exclusive_write", 00:25:02.383 "zoned": false, 00:25:02.383 "supported_io_types": { 00:25:02.383 "read": true, 00:25:02.383 "write": true, 00:25:02.383 "unmap": true, 00:25:02.383 "flush": true, 00:25:02.383 "reset": true, 00:25:02.383 "nvme_admin": false, 00:25:02.383 "nvme_io": false, 00:25:02.383 "nvme_io_md": false, 00:25:02.383 "write_zeroes": true, 00:25:02.383 "zcopy": true, 00:25:02.383 "get_zone_info": false, 00:25:02.383 "zone_management": false, 00:25:02.383 "zone_append": false, 00:25:02.383 "compare": false, 00:25:02.383 "compare_and_write": false, 00:25:02.383 "abort": true, 00:25:02.383 "seek_hole": false, 00:25:02.383 "seek_data": false, 00:25:02.383 "copy": true, 00:25:02.383 "nvme_iov_md": false 00:25:02.383 }, 00:25:02.383 "memory_domains": [ 00:25:02.383 { 00:25:02.383 "dma_device_id": "system", 00:25:02.383 "dma_device_type": 1 00:25:02.383 }, 00:25:02.383 { 00:25:02.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.383 "dma_device_type": 2 00:25:02.383 } 00:25:02.383 ], 00:25:02.383 "driver_specific": {} 00:25:02.383 } 00:25:02.383 ] 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:02.383 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.384 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.642 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:02.642 "name": "Existed_Raid", 00:25:02.642 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:25:02.642 "strip_size_kb": 64, 00:25:02.642 "state": 
"online", 00:25:02.642 "raid_level": "raid0", 00:25:02.642 "superblock": true, 00:25:02.642 "num_base_bdevs": 4, 00:25:02.642 "num_base_bdevs_discovered": 4, 00:25:02.642 "num_base_bdevs_operational": 4, 00:25:02.642 "base_bdevs_list": [ 00:25:02.642 { 00:25:02.642 "name": "NewBaseBdev", 00:25:02.642 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:25:02.642 "is_configured": true, 00:25:02.642 "data_offset": 2048, 00:25:02.642 "data_size": 63488 00:25:02.642 }, 00:25:02.642 { 00:25:02.642 "name": "BaseBdev2", 00:25:02.642 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:25:02.642 "is_configured": true, 00:25:02.642 "data_offset": 2048, 00:25:02.642 "data_size": 63488 00:25:02.642 }, 00:25:02.642 { 00:25:02.642 "name": "BaseBdev3", 00:25:02.642 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:25:02.642 "is_configured": true, 00:25:02.642 "data_offset": 2048, 00:25:02.642 "data_size": 63488 00:25:02.642 }, 00:25:02.642 { 00:25:02.642 "name": "BaseBdev4", 00:25:02.642 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:25:02.642 "is_configured": true, 00:25:02.642 "data_offset": 2048, 00:25:02.642 "data_size": 63488 00:25:02.642 } 00:25:02.642 ] 00:25:02.642 }' 00:25:02.642 08:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:02.642 08:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:03.207 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:03.465 [2024-07-12 08:51:38.482738] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:03.465 "name": "Existed_Raid", 00:25:03.465 "aliases": [ 00:25:03.465 "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf" 00:25:03.465 ], 00:25:03.465 "product_name": "Raid Volume", 00:25:03.465 "block_size": 512, 00:25:03.465 "num_blocks": 253952, 00:25:03.465 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:25:03.465 "assigned_rate_limits": { 00:25:03.465 "rw_ios_per_sec": 0, 00:25:03.465 "rw_mbytes_per_sec": 0, 00:25:03.465 "r_mbytes_per_sec": 0, 00:25:03.465 "w_mbytes_per_sec": 0 00:25:03.465 }, 00:25:03.465 "claimed": false, 00:25:03.465 "zoned": false, 00:25:03.465 "supported_io_types": { 00:25:03.465 "read": true, 00:25:03.465 "write": true, 00:25:03.465 "unmap": true, 00:25:03.465 "flush": true, 00:25:03.465 "reset": true, 00:25:03.465 "nvme_admin": false, 00:25:03.465 "nvme_io": false, 00:25:03.465 "nvme_io_md": false, 00:25:03.465 "write_zeroes": true, 00:25:03.465 "zcopy": false, 00:25:03.465 "get_zone_info": false, 00:25:03.465 
"zone_management": false, 00:25:03.465 "zone_append": false, 00:25:03.465 "compare": false, 00:25:03.465 "compare_and_write": false, 00:25:03.465 "abort": false, 00:25:03.465 "seek_hole": false, 00:25:03.465 "seek_data": false, 00:25:03.465 "copy": false, 00:25:03.465 "nvme_iov_md": false 00:25:03.465 }, 00:25:03.465 "memory_domains": [ 00:25:03.465 { 00:25:03.465 "dma_device_id": "system", 00:25:03.465 "dma_device_type": 1 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.465 "dma_device_type": 2 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "system", 00:25:03.465 "dma_device_type": 1 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.465 "dma_device_type": 2 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "system", 00:25:03.465 "dma_device_type": 1 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.465 "dma_device_type": 2 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "system", 00:25:03.465 "dma_device_type": 1 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.465 "dma_device_type": 2 00:25:03.465 } 00:25:03.465 ], 00:25:03.465 "driver_specific": { 00:25:03.465 "raid": { 00:25:03.465 "uuid": "620fb95e-73b6-41b5-ad2c-8e5d2b4f90cf", 00:25:03.465 "strip_size_kb": 64, 00:25:03.465 "state": "online", 00:25:03.465 "raid_level": "raid0", 00:25:03.465 "superblock": true, 00:25:03.465 "num_base_bdevs": 4, 00:25:03.465 "num_base_bdevs_discovered": 4, 00:25:03.465 "num_base_bdevs_operational": 4, 00:25:03.465 "base_bdevs_list": [ 00:25:03.465 { 00:25:03.465 "name": "NewBaseBdev", 00:25:03.465 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:25:03.465 "is_configured": true, 00:25:03.465 "data_offset": 2048, 00:25:03.465 "data_size": 63488 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "name": "BaseBdev2", 00:25:03.465 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:25:03.465 "is_configured": true, 00:25:03.465 "data_offset": 2048, 00:25:03.465 "data_size": 63488 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "name": "BaseBdev3", 00:25:03.465 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:25:03.465 "is_configured": true, 00:25:03.465 "data_offset": 2048, 00:25:03.465 "data_size": 63488 00:25:03.465 }, 00:25:03.465 { 00:25:03.465 "name": "BaseBdev4", 00:25:03.465 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:25:03.465 "is_configured": true, 00:25:03.465 "data_offset": 2048, 00:25:03.465 "data_size": 63488 00:25:03.465 } 00:25:03.465 ] 00:25:03.465 } 00:25:03.465 } 00:25:03.465 }' 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:03.465 BaseBdev2 00:25:03.465 BaseBdev3 00:25:03.465 BaseBdev4' 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:03.465 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:03.724 "name": 
"NewBaseBdev", 00:25:03.724 "aliases": [ 00:25:03.724 "be961193-37a1-4ca5-81a1-386f20b7f3f5" 00:25:03.724 ], 00:25:03.724 "product_name": "Malloc disk", 00:25:03.724 "block_size": 512, 00:25:03.724 "num_blocks": 65536, 00:25:03.724 "uuid": "be961193-37a1-4ca5-81a1-386f20b7f3f5", 00:25:03.724 "assigned_rate_limits": { 00:25:03.724 "rw_ios_per_sec": 0, 00:25:03.724 "rw_mbytes_per_sec": 0, 00:25:03.724 "r_mbytes_per_sec": 0, 00:25:03.724 "w_mbytes_per_sec": 0 00:25:03.724 }, 00:25:03.724 "claimed": true, 00:25:03.724 "claim_type": "exclusive_write", 00:25:03.724 "zoned": false, 00:25:03.724 "supported_io_types": { 00:25:03.724 "read": true, 00:25:03.724 "write": true, 00:25:03.724 "unmap": true, 00:25:03.724 "flush": true, 00:25:03.724 "reset": true, 00:25:03.724 "nvme_admin": false, 00:25:03.724 "nvme_io": false, 00:25:03.724 "nvme_io_md": false, 00:25:03.724 "write_zeroes": true, 00:25:03.724 "zcopy": true, 00:25:03.724 "get_zone_info": false, 00:25:03.724 "zone_management": false, 00:25:03.724 "zone_append": false, 00:25:03.724 "compare": false, 00:25:03.724 "compare_and_write": false, 00:25:03.724 "abort": true, 00:25:03.724 "seek_hole": false, 00:25:03.724 "seek_data": false, 00:25:03.724 "copy": true, 00:25:03.724 "nvme_iov_md": false 00:25:03.724 }, 00:25:03.724 "memory_domains": [ 00:25:03.724 { 00:25:03.724 "dma_device_id": "system", 00:25:03.724 "dma_device_type": 1 00:25:03.724 }, 00:25:03.724 { 00:25:03.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.724 "dma_device_type": 2 00:25:03.724 } 00:25:03.724 ], 00:25:03.724 "driver_specific": {} 00:25:03.724 }' 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.724 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.983 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:03.983 08:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.983 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.983 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.983 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.983 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.241 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.242 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:04.242 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:04.242 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:04.506 "name": "BaseBdev2", 00:25:04.506 "aliases": [ 00:25:04.506 "9e38be96-5fbd-4999-a338-589eaf199bef" 00:25:04.506 ], 00:25:04.506 "product_name": "Malloc disk", 
00:25:04.506 "block_size": 512, 00:25:04.506 "num_blocks": 65536, 00:25:04.506 "uuid": "9e38be96-5fbd-4999-a338-589eaf199bef", 00:25:04.506 "assigned_rate_limits": { 00:25:04.506 "rw_ios_per_sec": 0, 00:25:04.506 "rw_mbytes_per_sec": 0, 00:25:04.506 "r_mbytes_per_sec": 0, 00:25:04.506 "w_mbytes_per_sec": 0 00:25:04.506 }, 00:25:04.506 "claimed": true, 00:25:04.506 "claim_type": "exclusive_write", 00:25:04.506 "zoned": false, 00:25:04.506 "supported_io_types": { 00:25:04.506 "read": true, 00:25:04.506 "write": true, 00:25:04.506 "unmap": true, 00:25:04.506 "flush": true, 00:25:04.506 "reset": true, 00:25:04.506 "nvme_admin": false, 00:25:04.506 "nvme_io": false, 00:25:04.506 "nvme_io_md": false, 00:25:04.506 "write_zeroes": true, 00:25:04.506 "zcopy": true, 00:25:04.506 "get_zone_info": false, 00:25:04.506 "zone_management": false, 00:25:04.506 "zone_append": false, 00:25:04.506 "compare": false, 00:25:04.506 "compare_and_write": false, 00:25:04.506 "abort": true, 00:25:04.506 "seek_hole": false, 00:25:04.506 "seek_data": false, 00:25:04.506 "copy": true, 00:25:04.506 "nvme_iov_md": false 00:25:04.506 }, 00:25:04.506 "memory_domains": [ 00:25:04.506 { 00:25:04.506 "dma_device_id": "system", 00:25:04.506 "dma_device_type": 1 00:25:04.506 }, 00:25:04.506 { 00:25:04.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.506 "dma_device_type": 2 00:25:04.506 } 00:25:04.506 ], 00:25:04.506 "driver_specific": {} 00:25:04.506 }' 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:04.506 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:04.764 08:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:05.023 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.023 "name": "BaseBdev3", 00:25:05.023 "aliases": [ 00:25:05.023 "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb" 00:25:05.023 ], 00:25:05.023 "product_name": "Malloc disk", 00:25:05.023 "block_size": 512, 00:25:05.023 "num_blocks": 65536, 00:25:05.023 "uuid": "98e83d38-b4b9-4ec1-abd4-6af5a6fdafbb", 00:25:05.023 
"assigned_rate_limits": { 00:25:05.023 "rw_ios_per_sec": 0, 00:25:05.023 "rw_mbytes_per_sec": 0, 00:25:05.023 "r_mbytes_per_sec": 0, 00:25:05.023 "w_mbytes_per_sec": 0 00:25:05.023 }, 00:25:05.023 "claimed": true, 00:25:05.023 "claim_type": "exclusive_write", 00:25:05.023 "zoned": false, 00:25:05.023 "supported_io_types": { 00:25:05.023 "read": true, 00:25:05.023 "write": true, 00:25:05.023 "unmap": true, 00:25:05.023 "flush": true, 00:25:05.023 "reset": true, 00:25:05.023 "nvme_admin": false, 00:25:05.023 "nvme_io": false, 00:25:05.023 "nvme_io_md": false, 00:25:05.023 "write_zeroes": true, 00:25:05.023 "zcopy": true, 00:25:05.023 "get_zone_info": false, 00:25:05.023 "zone_management": false, 00:25:05.023 "zone_append": false, 00:25:05.023 "compare": false, 00:25:05.023 "compare_and_write": false, 00:25:05.023 "abort": true, 00:25:05.023 "seek_hole": false, 00:25:05.023 "seek_data": false, 00:25:05.023 "copy": true, 00:25:05.023 "nvme_iov_md": false 00:25:05.023 }, 00:25:05.023 "memory_domains": [ 00:25:05.023 { 00:25:05.023 "dma_device_id": "system", 00:25:05.023 "dma_device_type": 1 00:25:05.023 }, 00:25:05.023 { 00:25:05.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.023 "dma_device_type": 2 00:25:05.023 } 00:25:05.023 ], 00:25:05.023 "driver_specific": {} 00:25:05.023 }' 00:25:05.023 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:05.281 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:05.540 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:05.799 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.799 "name": "BaseBdev4", 00:25:05.799 "aliases": [ 00:25:05.799 "c359fcd8-8116-4f46-9352-bb004be2c059" 00:25:05.799 ], 00:25:05.799 "product_name": "Malloc disk", 00:25:05.799 "block_size": 512, 00:25:05.799 "num_blocks": 65536, 00:25:05.799 "uuid": "c359fcd8-8116-4f46-9352-bb004be2c059", 00:25:05.799 "assigned_rate_limits": { 00:25:05.799 "rw_ios_per_sec": 0, 00:25:05.799 "rw_mbytes_per_sec": 0, 00:25:05.799 "r_mbytes_per_sec": 0, 00:25:05.799 
"w_mbytes_per_sec": 0 00:25:05.799 }, 00:25:05.799 "claimed": true, 00:25:05.799 "claim_type": "exclusive_write", 00:25:05.799 "zoned": false, 00:25:05.799 "supported_io_types": { 00:25:05.799 "read": true, 00:25:05.799 "write": true, 00:25:05.799 "unmap": true, 00:25:05.799 "flush": true, 00:25:05.799 "reset": true, 00:25:05.799 "nvme_admin": false, 00:25:05.799 "nvme_io": false, 00:25:05.799 "nvme_io_md": false, 00:25:05.799 "write_zeroes": true, 00:25:05.799 "zcopy": true, 00:25:05.799 "get_zone_info": false, 00:25:05.799 "zone_management": false, 00:25:05.799 "zone_append": false, 00:25:05.799 "compare": false, 00:25:05.799 "compare_and_write": false, 00:25:05.799 "abort": true, 00:25:05.799 "seek_hole": false, 00:25:05.799 "seek_data": false, 00:25:05.799 "copy": true, 00:25:05.799 "nvme_iov_md": false 00:25:05.799 }, 00:25:05.799 "memory_domains": [ 00:25:05.799 { 00:25:05.799 "dma_device_id": "system", 00:25:05.799 "dma_device_type": 1 00:25:05.799 }, 00:25:05.799 { 00:25:05.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.799 "dma_device_type": 2 00:25:05.799 } 00:25:05.799 ], 00:25:05.799 "driver_specific": {} 00:25:05.799 }' 00:25:05.799 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.799 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.799 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.799 08:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.069 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:06.381 [2024-07-12 08:51:41.479373] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:06.381 [2024-07-12 08:51:41.479728] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:06.381 [2024-07-12 08:51:41.479971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:06.381 [2024-07-12 08:51:41.480210] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:06.381 [2024-07-12 08:51:41.480427] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136790 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 136790 ']' 00:25:06.381 08:51:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 136790 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136790 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136790' 00:25:06.381 killing process with pid 136790 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 136790 00:25:06.381 08:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 136790 00:25:06.381 [2024-07-12 08:51:41.516717] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:06.648 [2024-07-12 08:51:41.813311] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:08.023 ************************************ 00:25:08.023 END TEST raid_state_function_test_sb 00:25:08.023 ************************************ 00:25:08.023 08:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:08.023 00:25:08.023 real 0m35.320s 00:25:08.023 user 1m6.579s 00:25:08.023 sys 0m3.648s 00:25:08.023 08:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.023 08:51:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.023 08:51:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:08.023 08:51:42 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:25:08.023 08:51:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:08.023 08:51:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.023 08:51:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:08.023 ************************************ 00:25:08.023 START TEST raid_superblock_test 00:25:08.023 ************************************ 00:25:08.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
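The killprocess teardown traced just above follows the stock autotest pattern: probe the pid with kill -0, confirm the command name is not a sudo wrapper, then signal and reap the reactor. A condensed sketch of that flow (simplified; the real helper in common/autotest_common.sh has more branches, e.g. a separate path for sudo-wrapped processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # nothing to reap if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            # An SPDK app shows up as reactor_0; a sudo wrapper needs different handling.
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap so the test sees the exit status
    }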
00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:08.023 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137972 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137972 /var/tmp/spdk-raid.sock 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 137972 ']' 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.024 08:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.024 [2024-07-12 08:51:42.972908] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
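The locals above (num_base_bdevs=4, strip_size=64) drive the construction loop that the trace below walks through four times: each pass creates a 32 MiB malloc bdev with 512-byte blocks (32 MiB / 512 B gives the 65536 num_blocks seen in the bdev dumps) and wraps it in a passthru bdev with a deterministic UUID. Restated as a plain loop, a sketch of bdev_raid.sh@415-425 with the base_bdevs_* array bookkeeping omitted:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2 3 4; do
      # 32 MiB malloc bdev, 512-byte blocks -> 65536 blocks per base bdev
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
      # Passthru layer on top, so the raid claims ptN rather than mallocN.
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done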
00:25:08.024 [2024-07-12 08:51:42.973364] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137972 ] 00:25:08.024 [2024-07-12 08:51:43.131083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.281 [2024-07-12 08:51:43.372794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.539 [2024-07-12 08:51:43.550195] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:08.806 08:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:09.064 malloc1 00:25:09.064 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:09.323 [2024-07-12 08:51:44.421242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:09.323 [2024-07-12 08:51:44.421577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.323 [2024-07-12 08:51:44.421738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:09.323 [2024-07-12 08:51:44.421890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.323 [2024-07-12 08:51:44.424455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.323 [2024-07-12 08:51:44.424667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:09.323 pt1 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:09.323 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:09.582 malloc2 00:25:09.582 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:09.840 [2024-07-12 08:51:44.932234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:09.840 [2024-07-12 08:51:44.932651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.840 [2024-07-12 08:51:44.932855] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:25:09.840 [2024-07-12 08:51:44.933024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.841 [2024-07-12 08:51:44.935452] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.841 [2024-07-12 08:51:44.935709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:09.841 pt2 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:09.841 08:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:10.099 malloc3 00:25:10.099 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:10.358 [2024-07-12 08:51:45.401634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:10.358 [2024-07-12 08:51:45.401940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.358 [2024-07-12 08:51:45.402100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:10.358 [2024-07-12 08:51:45.402244] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.358 [2024-07-12 08:51:45.404567] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.358 [2024-07-12 08:51:45.404753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:10.358 pt3 00:25:10.358 
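Once pt4 is registered just below, the four passthru bdevs are assembled into a RAID0 array; the trailing -s flag requests an on-disk superblock, which the second half of this test depends on. The creation call and the probe that verify_raid_bdev_state performs afterwards, pulled out of the trace (a sketch; the real helper also compares raid_level, strip_size and the base-bdev counts from the same JSON):

  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
      -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # verify_raid_bdev_state: list all raid bdevs, select ours, check the state.
  state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") | .state')
  [[ $state == online ]]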
08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:10.358 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:10.617 malloc4 00:25:10.617 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:10.875 [2024-07-12 08:51:45.837835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:10.875 [2024-07-12 08:51:45.838102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.875 [2024-07-12 08:51:45.838254] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:10.875 [2024-07-12 08:51:45.838386] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.875 [2024-07-12 08:51:45.840444] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.875 [2024-07-12 08:51:45.840693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:10.875 pt4 00:25:10.875 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:10.875 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:10.875 08:51:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:10.875 [2024-07-12 08:51:46.021879] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:10.875 [2024-07-12 08:51:46.023525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:10.875 [2024-07-12 08:51:46.023725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:10.875 [2024-07-12 08:51:46.023908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:10.875 [2024-07-12 08:51:46.024296] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:25:10.875 [2024-07-12 08:51:46.024447] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:10.875 [2024-07-12 08:51:46.024768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:10.875 [2024-07-12 08:51:46.025201] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:25:10.875 [2024-07-12 08:51:46.025328] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:25:10.875 [2024-07-12 08:51:46.025547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.875 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:10.875 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:10.875 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:10.875 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.876 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.135 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.135 "name": "raid_bdev1", 00:25:11.135 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:11.135 "strip_size_kb": 64, 00:25:11.135 "state": "online", 00:25:11.135 "raid_level": "raid0", 00:25:11.135 "superblock": true, 00:25:11.135 "num_base_bdevs": 4, 00:25:11.135 "num_base_bdevs_discovered": 4, 00:25:11.135 "num_base_bdevs_operational": 4, 00:25:11.135 "base_bdevs_list": [ 00:25:11.135 { 00:25:11.135 "name": "pt1", 00:25:11.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:11.135 "is_configured": true, 00:25:11.135 "data_offset": 2048, 00:25:11.135 "data_size": 63488 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "name": "pt2", 00:25:11.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.135 "is_configured": true, 00:25:11.135 "data_offset": 2048, 00:25:11.135 "data_size": 63488 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "name": "pt3", 00:25:11.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:11.135 "is_configured": true, 00:25:11.135 "data_offset": 2048, 00:25:11.135 "data_size": 63488 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "name": "pt4", 00:25:11.135 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:11.135 "is_configured": true, 00:25:11.135 "data_offset": 2048, 00:25:11.135 "data_size": 63488 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 }' 00:25:11.135 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.135 08:51:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:11.701 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:11.959 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:11.959 08:51:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:11.959 [2024-07-12 08:51:47.086369] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.959 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:11.959 "name": "raid_bdev1", 00:25:11.959 "aliases": [ 00:25:11.959 "27ae5bae-fb5c-4482-9778-4add492e914e" 00:25:11.959 ], 00:25:11.959 "product_name": "Raid Volume", 00:25:11.959 "block_size": 512, 00:25:11.959 "num_blocks": 253952, 00:25:11.959 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:11.959 "assigned_rate_limits": { 00:25:11.959 "rw_ios_per_sec": 0, 00:25:11.959 "rw_mbytes_per_sec": 0, 00:25:11.959 "r_mbytes_per_sec": 0, 00:25:11.959 "w_mbytes_per_sec": 0 00:25:11.959 }, 00:25:11.959 "claimed": false, 00:25:11.959 "zoned": false, 00:25:11.959 "supported_io_types": { 00:25:11.959 "read": true, 00:25:11.959 "write": true, 00:25:11.959 "unmap": true, 00:25:11.959 "flush": true, 00:25:11.959 "reset": true, 00:25:11.959 "nvme_admin": false, 00:25:11.959 "nvme_io": false, 00:25:11.959 "nvme_io_md": false, 00:25:11.959 "write_zeroes": true, 00:25:11.959 "zcopy": false, 00:25:11.959 "get_zone_info": false, 00:25:11.959 "zone_management": false, 00:25:11.959 "zone_append": false, 00:25:11.959 "compare": false, 00:25:11.959 "compare_and_write": false, 00:25:11.959 "abort": false, 00:25:11.959 "seek_hole": false, 00:25:11.959 "seek_data": false, 00:25:11.959 "copy": false, 00:25:11.959 "nvme_iov_md": false 00:25:11.959 }, 00:25:11.959 "memory_domains": [ 00:25:11.959 { 00:25:11.959 "dma_device_id": "system", 00:25:11.959 "dma_device_type": 1 00:25:11.959 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.960 "dma_device_type": 2 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "system", 00:25:11.960 "dma_device_type": 1 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.960 "dma_device_type": 2 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "system", 00:25:11.960 "dma_device_type": 1 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.960 "dma_device_type": 2 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "system", 00:25:11.960 "dma_device_type": 1 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.960 "dma_device_type": 2 00:25:11.960 } 00:25:11.960 ], 00:25:11.960 "driver_specific": { 00:25:11.960 "raid": { 00:25:11.960 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:11.960 "strip_size_kb": 64, 00:25:11.960 "state": "online", 00:25:11.960 "raid_level": "raid0", 00:25:11.960 "superblock": true, 00:25:11.960 "num_base_bdevs": 4, 00:25:11.960 "num_base_bdevs_discovered": 4, 00:25:11.960 "num_base_bdevs_operational": 4, 00:25:11.960 "base_bdevs_list": [ 00:25:11.960 { 00:25:11.960 "name": "pt1", 00:25:11.960 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:11.960 "is_configured": true, 00:25:11.960 "data_offset": 2048, 00:25:11.960 "data_size": 63488 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "name": "pt2", 00:25:11.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.960 "is_configured": true, 00:25:11.960 "data_offset": 2048, 00:25:11.960 "data_size": 63488 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "name": "pt3", 00:25:11.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:11.960 "is_configured": true, 00:25:11.960 "data_offset": 2048, 00:25:11.960 "data_size": 63488 00:25:11.960 }, 00:25:11.960 { 00:25:11.960 "name": "pt4", 00:25:11.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:11.960 "is_configured": true, 00:25:11.960 "data_offset": 2048, 00:25:11.960 "data_size": 63488 00:25:11.960 } 00:25:11.960 ] 00:25:11.960 } 00:25:11.960 } 00:25:11.960 }' 00:25:11.960 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.960 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:11.960 pt2 00:25:11.960 pt3 00:25:11.960 pt4' 00:25:11.960 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.960 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:11.960 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.218 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.218 "name": "pt1", 00:25:12.218 "aliases": [ 00:25:12.218 "00000000-0000-0000-0000-000000000001" 00:25:12.218 ], 00:25:12.218 "product_name": "passthru", 00:25:12.218 "block_size": 512, 00:25:12.218 "num_blocks": 65536, 00:25:12.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:12.218 "assigned_rate_limits": { 00:25:12.218 "rw_ios_per_sec": 0, 00:25:12.218 "rw_mbytes_per_sec": 0, 00:25:12.218 "r_mbytes_per_sec": 0, 00:25:12.218 "w_mbytes_per_sec": 0 00:25:12.218 }, 00:25:12.218 "claimed": true, 00:25:12.218 "claim_type": "exclusive_write", 00:25:12.218 "zoned": false, 00:25:12.218 "supported_io_types": { 00:25:12.218 "read": true, 00:25:12.218 "write": true, 00:25:12.218 "unmap": true, 00:25:12.218 "flush": true, 00:25:12.218 "reset": true, 00:25:12.218 "nvme_admin": false, 00:25:12.218 "nvme_io": false, 00:25:12.218 "nvme_io_md": false, 00:25:12.218 "write_zeroes": true, 00:25:12.218 "zcopy": true, 00:25:12.218 "get_zone_info": false, 00:25:12.218 "zone_management": false, 00:25:12.218 "zone_append": false, 00:25:12.218 "compare": false, 00:25:12.218 "compare_and_write": false, 00:25:12.218 "abort": true, 00:25:12.218 "seek_hole": false, 00:25:12.218 "seek_data": false, 00:25:12.218 "copy": true, 00:25:12.218 "nvme_iov_md": false 00:25:12.218 }, 00:25:12.218 "memory_domains": [ 00:25:12.218 { 00:25:12.218 "dma_device_id": "system", 00:25:12.218 "dma_device_type": 1 00:25:12.218 }, 00:25:12.218 { 00:25:12.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.218 "dma_device_type": 2 00:25:12.218 } 00:25:12.218 ], 00:25:12.218 "driver_specific": { 00:25:12.218 "passthru": { 00:25:12.218 "name": "pt1", 00:25:12.218 "base_bdev_name": "malloc1" 00:25:12.218 } 00:25:12.218 } 00:25:12.218 }' 00:25:12.218 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.218 08:51:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.477 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:12.735 08:51:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.994 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.994 "name": "pt2", 00:25:12.994 "aliases": [ 00:25:12.994 "00000000-0000-0000-0000-000000000002" 00:25:12.994 ], 00:25:12.994 "product_name": "passthru", 00:25:12.994 "block_size": 512, 00:25:12.994 "num_blocks": 65536, 00:25:12.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.994 "assigned_rate_limits": { 00:25:12.994 "rw_ios_per_sec": 0, 00:25:12.994 "rw_mbytes_per_sec": 0, 00:25:12.994 "r_mbytes_per_sec": 0, 00:25:12.994 "w_mbytes_per_sec": 0 00:25:12.994 }, 00:25:12.994 "claimed": true, 00:25:12.994 "claim_type": "exclusive_write", 00:25:12.994 "zoned": false, 00:25:12.994 "supported_io_types": { 00:25:12.994 "read": true, 00:25:12.994 "write": true, 00:25:12.994 "unmap": true, 00:25:12.994 "flush": true, 00:25:12.994 "reset": true, 00:25:12.994 "nvme_admin": false, 00:25:12.994 "nvme_io": false, 00:25:12.994 "nvme_io_md": false, 00:25:12.994 "write_zeroes": true, 00:25:12.994 "zcopy": true, 00:25:12.994 "get_zone_info": false, 00:25:12.994 "zone_management": false, 00:25:12.994 "zone_append": false, 00:25:12.994 "compare": false, 00:25:12.994 "compare_and_write": false, 00:25:12.994 "abort": true, 00:25:12.994 "seek_hole": false, 00:25:12.994 "seek_data": false, 00:25:12.994 "copy": true, 00:25:12.994 "nvme_iov_md": false 00:25:12.994 }, 00:25:12.994 "memory_domains": [ 00:25:12.994 { 00:25:12.994 "dma_device_id": "system", 00:25:12.994 "dma_device_type": 1 00:25:12.994 }, 00:25:12.994 { 00:25:12.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.994 "dma_device_type": 2 00:25:12.994 } 00:25:12.994 ], 00:25:12.994 "driver_specific": { 00:25:12.994 "passthru": { 00:25:12.994 "name": "pt2", 00:25:12.994 "base_bdev_name": "malloc2" 00:25:12.994 } 00:25:12.994 } 00:25:12.994 }' 00:25:12.994 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.994 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.252 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.510 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.510 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.510 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:13.510 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:13.510 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.768 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.768 "name": "pt3", 00:25:13.768 "aliases": [ 00:25:13.768 "00000000-0000-0000-0000-000000000003" 00:25:13.768 ], 00:25:13.768 "product_name": "passthru", 00:25:13.768 "block_size": 512, 00:25:13.768 "num_blocks": 65536, 00:25:13.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:13.768 "assigned_rate_limits": { 00:25:13.768 "rw_ios_per_sec": 0, 00:25:13.768 "rw_mbytes_per_sec": 0, 00:25:13.768 "r_mbytes_per_sec": 0, 00:25:13.768 "w_mbytes_per_sec": 0 00:25:13.768 }, 00:25:13.768 "claimed": true, 00:25:13.768 "claim_type": "exclusive_write", 00:25:13.768 "zoned": false, 00:25:13.768 "supported_io_types": { 00:25:13.768 "read": true, 00:25:13.768 "write": true, 00:25:13.768 "unmap": true, 00:25:13.768 "flush": true, 00:25:13.768 "reset": true, 00:25:13.768 "nvme_admin": false, 00:25:13.768 "nvme_io": false, 00:25:13.768 "nvme_io_md": false, 00:25:13.768 "write_zeroes": true, 00:25:13.768 "zcopy": true, 00:25:13.768 "get_zone_info": false, 00:25:13.768 "zone_management": false, 00:25:13.768 "zone_append": false, 00:25:13.768 "compare": false, 00:25:13.768 "compare_and_write": false, 00:25:13.768 "abort": true, 00:25:13.768 "seek_hole": false, 00:25:13.768 "seek_data": false, 00:25:13.768 "copy": true, 00:25:13.768 "nvme_iov_md": false 00:25:13.768 }, 00:25:13.768 "memory_domains": [ 00:25:13.768 { 00:25:13.768 "dma_device_id": "system", 00:25:13.768 "dma_device_type": 1 00:25:13.768 }, 00:25:13.768 { 00:25:13.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.768 "dma_device_type": 2 00:25:13.768 } 00:25:13.768 ], 00:25:13.768 "driver_specific": { 00:25:13.768 "passthru": { 00:25:13.768 "name": "pt3", 00:25:13.768 "base_bdev_name": "malloc3" 00:25:13.768 } 00:25:13.768 } 00:25:13.768 }' 00:25:13.768 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.768 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.768 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.768 08:51:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.026 08:51:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.026 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.284 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.284 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:14.284 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:14.284 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:14.543 "name": "pt4", 00:25:14.543 "aliases": [ 00:25:14.543 "00000000-0000-0000-0000-000000000004" 00:25:14.543 ], 00:25:14.543 "product_name": "passthru", 00:25:14.543 "block_size": 512, 00:25:14.543 "num_blocks": 65536, 00:25:14.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:14.543 "assigned_rate_limits": { 00:25:14.543 "rw_ios_per_sec": 0, 00:25:14.543 "rw_mbytes_per_sec": 0, 00:25:14.543 "r_mbytes_per_sec": 0, 00:25:14.543 "w_mbytes_per_sec": 0 00:25:14.543 }, 00:25:14.543 "claimed": true, 00:25:14.543 "claim_type": "exclusive_write", 00:25:14.543 "zoned": false, 00:25:14.543 "supported_io_types": { 00:25:14.543 "read": true, 00:25:14.543 "write": true, 00:25:14.543 "unmap": true, 00:25:14.543 "flush": true, 00:25:14.543 "reset": true, 00:25:14.543 "nvme_admin": false, 00:25:14.543 "nvme_io": false, 00:25:14.543 "nvme_io_md": false, 00:25:14.543 "write_zeroes": true, 00:25:14.543 "zcopy": true, 00:25:14.543 "get_zone_info": false, 00:25:14.543 "zone_management": false, 00:25:14.543 "zone_append": false, 00:25:14.543 "compare": false, 00:25:14.543 "compare_and_write": false, 00:25:14.543 "abort": true, 00:25:14.543 "seek_hole": false, 00:25:14.543 "seek_data": false, 00:25:14.543 "copy": true, 00:25:14.543 "nvme_iov_md": false 00:25:14.543 }, 00:25:14.543 "memory_domains": [ 00:25:14.543 { 00:25:14.543 "dma_device_id": "system", 00:25:14.543 "dma_device_type": 1 00:25:14.543 }, 00:25:14.543 { 00:25:14.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.543 "dma_device_type": 2 00:25:14.543 } 00:25:14.543 ], 00:25:14.543 "driver_specific": { 00:25:14.543 "passthru": { 00:25:14.543 "name": "pt4", 00:25:14.543 "base_bdev_name": "malloc4" 00:25:14.543 } 00:25:14.543 } 00:25:14.543 }' 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.543 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.801 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:15.059 08:51:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:15.059 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:15.059 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:15.059 [2024-07-12 08:51:50.247116] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.318 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=27ae5bae-fb5c-4482-9778-4add492e914e 00:25:15.318 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 27ae5bae-fb5c-4482-9778-4add492e914e ']' 00:25:15.318 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:15.318 [2024-07-12 08:51:50.454852] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.318 [2024-07-12 08:51:50.455035] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.318 [2024-07-12 08:51:50.455223] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.318 [2024-07-12 08:51:50.455423] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.318 [2024-07-12 08:51:50.455553] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:25:15.318 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.318 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:15.577 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:15.577 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:15.577 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:15.577 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:15.836 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:15.836 08:51:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:16.094 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.094 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:16.351 08:51:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.352 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:16.609 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:16.610 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.867 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.868 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:16.868 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:16.868 08:51:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:17.126 [2024-07-12 08:51:52.083209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:17.126 [2024-07-12 08:51:52.088809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:17.126 [2024-07-12 08:51:52.089202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:17.126 [2024-07-12 08:51:52.089467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:17.126 [2024-07-12 08:51:52.089741] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:17.126 [2024-07-12 08:51:52.090802] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:17.126 [2024-07-12 08:51:52.091293] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:25:17.126 [2024-07-12 08:51:52.091765] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:17.126 [2024-07-12 08:51:52.092169] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.126 [2024-07-12 08:51:52.092411] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:25:17.126 request: 00:25:17.126 { 00:25:17.126 "name": "raid_bdev1", 00:25:17.126 "raid_level": "raid0", 00:25:17.126 "base_bdevs": [ 00:25:17.126 "malloc1", 00:25:17.126 "malloc2", 00:25:17.126 "malloc3", 00:25:17.126 "malloc4" 00:25:17.126 ], 00:25:17.126 "strip_size_kb": 64, 00:25:17.126 "superblock": false, 00:25:17.127 "method": "bdev_raid_create", 00:25:17.127 "req_id": 1 00:25:17.127 } 00:25:17.127 Got JSON-RPC error response 00:25:17.127 response: 00:25:17.127 { 00:25:17.127 "code": -17, 00:25:17.127 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:17.127 } 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.127 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:17.384 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:17.384 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:17.384 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:17.384 [2024-07-12 08:51:52.573916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:17.384 [2024-07-12 08:51:52.574312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.384 [2024-07-12 08:51:52.574575] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:17.384 [2024-07-12 08:51:52.574841] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.384 [2024-07-12 08:51:52.577343] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.384 [2024-07-12 08:51:52.577608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:17.384 [2024-07-12 08:51:52.577943] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:17.384 [2024-07-12 08:51:52.578128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:17.642 pt1 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:17.642 08:51:52 
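The request/response pair above is the negative half of the test. raid_bdev1 and the pt bdevs were deleted, but the superblock written via -s lives on the underlying malloc bdevs, so building a fresh array directly on malloc1..malloc4 is rejected with -17 (File exists), one "Superblock of a different raid bdev found" error per member. The NOT wrapper in the script asserts exactly this failure; restated without the helper (a sketch):

  # Expected to fail: each mallocN still carries raid_bdev1's superblock.
  if "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
          -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo 'bdev_raid_create unexpectedly succeeded' >&2
      exit 1
  fi

Re-creating pt1 below then lets the examine path find that superblock, and raid_bdev1 starts re-assembling in the configuring state until all four members are back.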
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.642 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.905 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.905 "name": "raid_bdev1", 00:25:17.905 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:17.905 "strip_size_kb": 64, 00:25:17.905 "state": "configuring", 00:25:17.905 "raid_level": "raid0", 00:25:17.905 "superblock": true, 00:25:17.905 "num_base_bdevs": 4, 00:25:17.905 "num_base_bdevs_discovered": 1, 00:25:17.905 "num_base_bdevs_operational": 4, 00:25:17.905 "base_bdevs_list": [ 00:25:17.905 { 00:25:17.905 "name": "pt1", 00:25:17.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:17.905 "is_configured": true, 00:25:17.905 "data_offset": 2048, 00:25:17.905 "data_size": 63488 00:25:17.905 }, 00:25:17.905 { 00:25:17.905 "name": null, 00:25:17.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.905 "is_configured": false, 00:25:17.905 "data_offset": 2048, 00:25:17.905 "data_size": 63488 00:25:17.905 }, 00:25:17.905 { 00:25:17.905 "name": null, 00:25:17.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:17.905 "is_configured": false, 00:25:17.905 "data_offset": 2048, 00:25:17.905 "data_size": 63488 00:25:17.905 }, 00:25:17.905 { 00:25:17.905 "name": null, 00:25:17.905 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:17.905 "is_configured": false, 00:25:17.905 "data_offset": 2048, 00:25:17.905 "data_size": 63488 00:25:17.905 } 00:25:17.905 ] 00:25:17.905 }' 00:25:17.905 08:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.905 08:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 08:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:18.474 08:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:18.733 [2024-07-12 08:51:53.778367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:18.733 [2024-07-12 08:51:53.778670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.733 [2024-07-12 08:51:53.778836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:18.733 [2024-07-12 08:51:53.778983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.733 [2024-07-12 08:51:53.779565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:18.733 [2024-07-12 08:51:53.779759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:18.733 [2024-07-12 08:51:53.779981] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:18.733 [2024-07-12 08:51:53.780111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:18.733 pt2 00:25:18.733 08:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:18.990 [2024-07-12 08:51:54.018485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.991 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.249 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.249 "name": "raid_bdev1", 00:25:19.249 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:19.249 "strip_size_kb": 64, 00:25:19.249 "state": "configuring", 00:25:19.249 "raid_level": "raid0", 00:25:19.249 "superblock": true, 00:25:19.249 "num_base_bdevs": 4, 00:25:19.249 "num_base_bdevs_discovered": 1, 00:25:19.249 "num_base_bdevs_operational": 4, 00:25:19.249 "base_bdevs_list": [ 00:25:19.249 { 00:25:19.249 "name": "pt1", 00:25:19.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.249 "is_configured": true, 00:25:19.249 "data_offset": 2048, 00:25:19.249 "data_size": 63488 00:25:19.249 }, 00:25:19.249 { 00:25:19.249 "name": null, 00:25:19.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.249 "is_configured": false, 00:25:19.249 "data_offset": 2048, 00:25:19.249 "data_size": 63488 00:25:19.249 }, 00:25:19.249 { 00:25:19.249 "name": null, 00:25:19.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:19.249 "is_configured": false, 00:25:19.249 "data_offset": 2048, 00:25:19.249 "data_size": 63488 00:25:19.249 }, 00:25:19.249 { 00:25:19.249 "name": null, 00:25:19.249 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:19.249 "is_configured": false, 00:25:19.249 "data_offset": 2048, 00:25:19.249 "data_size": 63488 00:25:19.249 } 00:25:19.249 ] 00:25:19.249 }' 00:25:19.249 08:51:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.249 08:51:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.815 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:19.815 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:19.815 08:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:20.073 [2024-07-12 08:51:55.246789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:20.073 [2024-07-12 08:51:55.247047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.073 [2024-07-12 08:51:55.247125] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:20.073 [2024-07-12 08:51:55.247385] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.073 [2024-07-12 08:51:55.248069] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.073 [2024-07-12 08:51:55.248279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:20.073 [2024-07-12 08:51:55.248510] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:20.073 [2024-07-12 08:51:55.248667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:20.073 pt2 00:25:20.073 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:20.073 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:20.073 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:20.331 [2024-07-12 08:51:55.518858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:20.331 [2024-07-12 08:51:55.519129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.331 [2024-07-12 08:51:55.519291] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:20.331 [2024-07-12 08:51:55.519445] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.331 [2024-07-12 08:51:55.519982] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.331 [2024-07-12 08:51:55.520142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:20.331 [2024-07-12 08:51:55.520355] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:20.331 [2024-07-12 08:51:55.520484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:20.331 pt3 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:20.590 [2024-07-12 08:51:55.746935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:25:20.590 [2024-07-12 08:51:55.747179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.590 [2024-07-12 08:51:55.747256] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:20.590 [2024-07-12 08:51:55.747535] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.590 [2024-07-12 08:51:55.748148] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.590 [2024-07-12 08:51:55.748325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:20.590 [2024-07-12 08:51:55.748561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:20.590 [2024-07-12 08:51:55.748712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:20.590 [2024-07-12 08:51:55.748971] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:25:20.590 [2024-07-12 08:51:55.749092] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:20.590 [2024-07-12 08:51:55.749237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:20.590 [2024-07-12 08:51:55.749629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:25:20.590 [2024-07-12 08:51:55.749758] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:25:20.590 [2024-07-12 08:51:55.750003] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.590 pt4 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.590 08:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.849 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.849 "name": "raid_bdev1", 00:25:20.849 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:20.849 "strip_size_kb": 64, 00:25:20.849 "state": "online", 00:25:20.849 
"raid_level": "raid0", 00:25:20.849 "superblock": true, 00:25:20.849 "num_base_bdevs": 4, 00:25:20.849 "num_base_bdevs_discovered": 4, 00:25:20.849 "num_base_bdevs_operational": 4, 00:25:20.849 "base_bdevs_list": [ 00:25:20.849 { 00:25:20.849 "name": "pt1", 00:25:20.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:20.849 "is_configured": true, 00:25:20.849 "data_offset": 2048, 00:25:20.849 "data_size": 63488 00:25:20.849 }, 00:25:20.849 { 00:25:20.849 "name": "pt2", 00:25:20.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.849 "is_configured": true, 00:25:20.849 "data_offset": 2048, 00:25:20.849 "data_size": 63488 00:25:20.849 }, 00:25:20.849 { 00:25:20.849 "name": "pt3", 00:25:20.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.849 "is_configured": true, 00:25:20.849 "data_offset": 2048, 00:25:20.849 "data_size": 63488 00:25:20.849 }, 00:25:20.849 { 00:25:20.849 "name": "pt4", 00:25:20.849 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:20.849 "is_configured": true, 00:25:20.849 "data_offset": 2048, 00:25:20.849 "data_size": 63488 00:25:20.849 } 00:25:20.849 ] 00:25:20.849 }' 00:25:20.849 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.849 08:51:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:21.785 [2024-07-12 08:51:56.951635] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:21.785 "name": "raid_bdev1", 00:25:21.785 "aliases": [ 00:25:21.785 "27ae5bae-fb5c-4482-9778-4add492e914e" 00:25:21.785 ], 00:25:21.785 "product_name": "Raid Volume", 00:25:21.785 "block_size": 512, 00:25:21.785 "num_blocks": 253952, 00:25:21.785 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:21.785 "assigned_rate_limits": { 00:25:21.785 "rw_ios_per_sec": 0, 00:25:21.785 "rw_mbytes_per_sec": 0, 00:25:21.785 "r_mbytes_per_sec": 0, 00:25:21.785 "w_mbytes_per_sec": 0 00:25:21.785 }, 00:25:21.785 "claimed": false, 00:25:21.785 "zoned": false, 00:25:21.785 "supported_io_types": { 00:25:21.785 "read": true, 00:25:21.785 "write": true, 00:25:21.785 "unmap": true, 00:25:21.785 "flush": true, 00:25:21.785 "reset": true, 00:25:21.785 "nvme_admin": false, 00:25:21.785 "nvme_io": false, 00:25:21.785 "nvme_io_md": false, 00:25:21.785 "write_zeroes": true, 00:25:21.785 "zcopy": false, 00:25:21.785 "get_zone_info": false, 00:25:21.785 "zone_management": false, 00:25:21.785 "zone_append": false, 00:25:21.785 "compare": false, 00:25:21.785 "compare_and_write": false, 
00:25:21.785 "abort": false, 00:25:21.785 "seek_hole": false, 00:25:21.785 "seek_data": false, 00:25:21.785 "copy": false, 00:25:21.785 "nvme_iov_md": false 00:25:21.785 }, 00:25:21.785 "memory_domains": [ 00:25:21.785 { 00:25:21.785 "dma_device_id": "system", 00:25:21.785 "dma_device_type": 1 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.785 "dma_device_type": 2 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "system", 00:25:21.785 "dma_device_type": 1 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.785 "dma_device_type": 2 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "system", 00:25:21.785 "dma_device_type": 1 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.785 "dma_device_type": 2 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "system", 00:25:21.785 "dma_device_type": 1 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.785 "dma_device_type": 2 00:25:21.785 } 00:25:21.785 ], 00:25:21.785 "driver_specific": { 00:25:21.785 "raid": { 00:25:21.785 "uuid": "27ae5bae-fb5c-4482-9778-4add492e914e", 00:25:21.785 "strip_size_kb": 64, 00:25:21.785 "state": "online", 00:25:21.785 "raid_level": "raid0", 00:25:21.785 "superblock": true, 00:25:21.785 "num_base_bdevs": 4, 00:25:21.785 "num_base_bdevs_discovered": 4, 00:25:21.785 "num_base_bdevs_operational": 4, 00:25:21.785 "base_bdevs_list": [ 00:25:21.785 { 00:25:21.785 "name": "pt1", 00:25:21.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:21.785 "is_configured": true, 00:25:21.785 "data_offset": 2048, 00:25:21.785 "data_size": 63488 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "name": "pt2", 00:25:21.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:21.785 "is_configured": true, 00:25:21.785 "data_offset": 2048, 00:25:21.785 "data_size": 63488 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "name": "pt3", 00:25:21.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:21.785 "is_configured": true, 00:25:21.785 "data_offset": 2048, 00:25:21.785 "data_size": 63488 00:25:21.785 }, 00:25:21.785 { 00:25:21.785 "name": "pt4", 00:25:21.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:21.785 "is_configured": true, 00:25:21.785 "data_offset": 2048, 00:25:21.785 "data_size": 63488 00:25:21.785 } 00:25:21.785 ] 00:25:21.785 } 00:25:21.785 } 00:25:21.785 }' 00:25:21.785 08:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:22.043 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:22.043 pt2 00:25:22.043 pt3 00:25:22.043 pt4' 00:25:22.043 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:22.043 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:22.043 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.303 "name": "pt1", 00:25:22.303 "aliases": [ 00:25:22.303 "00000000-0000-0000-0000-000000000001" 00:25:22.303 ], 00:25:22.303 "product_name": "passthru", 00:25:22.303 "block_size": 512, 00:25:22.303 "num_blocks": 65536, 00:25:22.303 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:22.303 "assigned_rate_limits": { 00:25:22.303 "rw_ios_per_sec": 0, 00:25:22.303 "rw_mbytes_per_sec": 0, 00:25:22.303 "r_mbytes_per_sec": 0, 00:25:22.303 "w_mbytes_per_sec": 0 00:25:22.303 }, 00:25:22.303 "claimed": true, 00:25:22.303 "claim_type": "exclusive_write", 00:25:22.303 "zoned": false, 00:25:22.303 "supported_io_types": { 00:25:22.303 "read": true, 00:25:22.303 "write": true, 00:25:22.303 "unmap": true, 00:25:22.303 "flush": true, 00:25:22.303 "reset": true, 00:25:22.303 "nvme_admin": false, 00:25:22.303 "nvme_io": false, 00:25:22.303 "nvme_io_md": false, 00:25:22.303 "write_zeroes": true, 00:25:22.303 "zcopy": true, 00:25:22.303 "get_zone_info": false, 00:25:22.303 "zone_management": false, 00:25:22.303 "zone_append": false, 00:25:22.303 "compare": false, 00:25:22.303 "compare_and_write": false, 00:25:22.303 "abort": true, 00:25:22.303 "seek_hole": false, 00:25:22.303 "seek_data": false, 00:25:22.303 "copy": true, 00:25:22.303 "nvme_iov_md": false 00:25:22.303 }, 00:25:22.303 "memory_domains": [ 00:25:22.303 { 00:25:22.303 "dma_device_id": "system", 00:25:22.303 "dma_device_type": 1 00:25:22.303 }, 00:25:22.303 { 00:25:22.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.303 "dma_device_type": 2 00:25:22.303 } 00:25:22.303 ], 00:25:22.303 "driver_specific": { 00:25:22.303 "passthru": { 00:25:22.303 "name": "pt1", 00:25:22.303 "base_bdev_name": "malloc1" 00:25:22.303 } 00:25:22.303 } 00:25:22.303 }' 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.303 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:22.562 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:22.821 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:22.821 "name": "pt2", 00:25:22.821 "aliases": [ 00:25:22.821 "00000000-0000-0000-0000-000000000002" 00:25:22.821 ], 00:25:22.821 "product_name": "passthru", 00:25:22.821 "block_size": 512, 00:25:22.821 "num_blocks": 65536, 00:25:22.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:22.821 "assigned_rate_limits": { 00:25:22.821 "rw_ios_per_sec": 0, 00:25:22.821 "rw_mbytes_per_sec": 0, 
00:25:22.821 "r_mbytes_per_sec": 0, 00:25:22.821 "w_mbytes_per_sec": 0 00:25:22.821 }, 00:25:22.821 "claimed": true, 00:25:22.821 "claim_type": "exclusive_write", 00:25:22.821 "zoned": false, 00:25:22.821 "supported_io_types": { 00:25:22.821 "read": true, 00:25:22.821 "write": true, 00:25:22.821 "unmap": true, 00:25:22.821 "flush": true, 00:25:22.821 "reset": true, 00:25:22.821 "nvme_admin": false, 00:25:22.821 "nvme_io": false, 00:25:22.821 "nvme_io_md": false, 00:25:22.821 "write_zeroes": true, 00:25:22.821 "zcopy": true, 00:25:22.821 "get_zone_info": false, 00:25:22.821 "zone_management": false, 00:25:22.821 "zone_append": false, 00:25:22.821 "compare": false, 00:25:22.821 "compare_and_write": false, 00:25:22.821 "abort": true, 00:25:22.821 "seek_hole": false, 00:25:22.821 "seek_data": false, 00:25:22.821 "copy": true, 00:25:22.821 "nvme_iov_md": false 00:25:22.821 }, 00:25:22.821 "memory_domains": [ 00:25:22.821 { 00:25:22.821 "dma_device_id": "system", 00:25:22.821 "dma_device_type": 1 00:25:22.821 }, 00:25:22.821 { 00:25:22.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.821 "dma_device_type": 2 00:25:22.821 } 00:25:22.821 ], 00:25:22.821 "driver_specific": { 00:25:22.821 "passthru": { 00:25:22.821 "name": "pt2", 00:25:22.821 "base_bdev_name": "malloc2" 00:25:22.821 } 00:25:22.821 } 00:25:22.821 }' 00:25:22.821 08:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.079 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:23.337 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:23.614 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:23.614 "name": "pt3", 00:25:23.614 "aliases": [ 00:25:23.614 "00000000-0000-0000-0000-000000000003" 00:25:23.614 ], 00:25:23.614 "product_name": "passthru", 00:25:23.614 "block_size": 512, 00:25:23.614 "num_blocks": 65536, 00:25:23.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:23.614 "assigned_rate_limits": { 00:25:23.614 "rw_ios_per_sec": 0, 00:25:23.614 "rw_mbytes_per_sec": 0, 00:25:23.614 "r_mbytes_per_sec": 0, 00:25:23.614 "w_mbytes_per_sec": 0 00:25:23.614 }, 00:25:23.614 "claimed": true, 00:25:23.614 "claim_type": 
"exclusive_write", 00:25:23.614 "zoned": false, 00:25:23.614 "supported_io_types": { 00:25:23.614 "read": true, 00:25:23.614 "write": true, 00:25:23.614 "unmap": true, 00:25:23.614 "flush": true, 00:25:23.614 "reset": true, 00:25:23.614 "nvme_admin": false, 00:25:23.614 "nvme_io": false, 00:25:23.614 "nvme_io_md": false, 00:25:23.614 "write_zeroes": true, 00:25:23.614 "zcopy": true, 00:25:23.614 "get_zone_info": false, 00:25:23.614 "zone_management": false, 00:25:23.614 "zone_append": false, 00:25:23.614 "compare": false, 00:25:23.614 "compare_and_write": false, 00:25:23.614 "abort": true, 00:25:23.614 "seek_hole": false, 00:25:23.614 "seek_data": false, 00:25:23.614 "copy": true, 00:25:23.614 "nvme_iov_md": false 00:25:23.614 }, 00:25:23.614 "memory_domains": [ 00:25:23.614 { 00:25:23.614 "dma_device_id": "system", 00:25:23.614 "dma_device_type": 1 00:25:23.614 }, 00:25:23.614 { 00:25:23.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.614 "dma_device_type": 2 00:25:23.614 } 00:25:23.614 ], 00:25:23.614 "driver_specific": { 00:25:23.614 "passthru": { 00:25:23.614 "name": "pt3", 00:25:23.614 "base_bdev_name": "malloc3" 00:25:23.614 } 00:25:23.614 } 00:25:23.614 }' 00:25:23.614 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.614 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.614 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:23.614 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:23.923 08:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.923 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.923 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:23.923 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:23.924 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:23.924 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.191 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.191 "name": "pt4", 00:25:24.191 "aliases": [ 00:25:24.191 "00000000-0000-0000-0000-000000000004" 00:25:24.191 ], 00:25:24.191 "product_name": "passthru", 00:25:24.191 "block_size": 512, 00:25:24.191 "num_blocks": 65536, 00:25:24.191 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:24.191 "assigned_rate_limits": { 00:25:24.191 "rw_ios_per_sec": 0, 00:25:24.191 "rw_mbytes_per_sec": 0, 00:25:24.191 "r_mbytes_per_sec": 0, 00:25:24.191 "w_mbytes_per_sec": 0 00:25:24.191 }, 00:25:24.191 "claimed": true, 00:25:24.191 "claim_type": "exclusive_write", 00:25:24.191 "zoned": false, 00:25:24.191 "supported_io_types": { 00:25:24.191 "read": true, 00:25:24.191 "write": true, 00:25:24.191 
"unmap": true, 00:25:24.192 "flush": true, 00:25:24.192 "reset": true, 00:25:24.192 "nvme_admin": false, 00:25:24.192 "nvme_io": false, 00:25:24.192 "nvme_io_md": false, 00:25:24.192 "write_zeroes": true, 00:25:24.192 "zcopy": true, 00:25:24.192 "get_zone_info": false, 00:25:24.192 "zone_management": false, 00:25:24.192 "zone_append": false, 00:25:24.192 "compare": false, 00:25:24.192 "compare_and_write": false, 00:25:24.192 "abort": true, 00:25:24.192 "seek_hole": false, 00:25:24.192 "seek_data": false, 00:25:24.192 "copy": true, 00:25:24.192 "nvme_iov_md": false 00:25:24.192 }, 00:25:24.192 "memory_domains": [ 00:25:24.192 { 00:25:24.192 "dma_device_id": "system", 00:25:24.192 "dma_device_type": 1 00:25:24.192 }, 00:25:24.192 { 00:25:24.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.192 "dma_device_type": 2 00:25:24.192 } 00:25:24.192 ], 00:25:24.192 "driver_specific": { 00:25:24.192 "passthru": { 00:25:24.192 "name": "pt4", 00:25:24.192 "base_bdev_name": "malloc4" 00:25:24.192 } 00:25:24.192 } 00:25:24.192 }' 00:25:24.192 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.192 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.450 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:24.451 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.709 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.709 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.709 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:24.709 08:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:24.967 [2024-07-12 08:52:00.000311] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 27ae5bae-fb5c-4482-9778-4add492e914e '!=' 27ae5bae-fb5c-4482-9778-4add492e914e ']' 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137972 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 137972 ']' 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 137972 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:25:24.967 08:52:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137972 00:25:24.967 killing process with pid 137972 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:24.967 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:24.968 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137972' 00:25:24.968 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 137972 00:25:24.968 08:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 137972 00:25:24.968 [2024-07-12 08:52:00.034390] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:24.968 [2024-07-12 08:52:00.034482] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:24.968 [2024-07-12 08:52:00.034555] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:24.968 [2024-07-12 08:52:00.034602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:25:25.225 [2024-07-12 08:52:00.345955] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:26.599 ************************************ 00:25:26.599 END TEST raid_superblock_test 00:25:26.599 ************************************ 00:25:26.599 08:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:26.599 00:25:26.599 real 0m18.508s 00:25:26.599 user 0m34.109s 00:25:26.599 sys 0m1.705s 00:25:26.599 08:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:26.599 08:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.599 08:52:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:26.599 08:52:01 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:25:26.599 08:52:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:26.599 08:52:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.599 08:52:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:26.599 ************************************ 00:25:26.599 START TEST raid_read_error_test 00:25:26.599 ************************************ 00:25:26.599 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:25:26.599 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:25:26.599 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8ZfHOLgcQQ 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138540 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138540 /var/tmp/spdk-raid.sock 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 138540 ']' 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.600 08:52:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.600 [2024-07-12 08:52:01.568092] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:25:26.600 [2024-07-12 08:52:01.569155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138540 ] 00:25:26.600 [2024-07-12 08:52:01.741617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.858 [2024-07-12 08:52:01.973050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.116 [2024-07-12 08:52:02.157522] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.374 08:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.374 08:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:27.374 08:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:27.374 08:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:27.632 BaseBdev1_malloc 00:25:27.632 08:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:27.890 true 00:25:27.890 08:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:28.149 [2024-07-12 08:52:03.154696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:28.149 [2024-07-12 08:52:03.154977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.149 [2024-07-12 08:52:03.155121] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:28.149 [2024-07-12 08:52:03.155228] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.149 [2024-07-12 08:52:03.157764] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.149 [2024-07-12 08:52:03.157937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:28.149 BaseBdev1 00:25:28.149 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:28.149 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:28.407 BaseBdev2_malloc 00:25:28.407 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:28.665 true 00:25:28.665 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:28.665 [2024-07-12 08:52:03.837313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:25:28.665 [2024-07-12 08:52:03.837602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.665 [2024-07-12 08:52:03.837744] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:28.666 [2024-07-12 08:52:03.837857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.666 [2024-07-12 08:52:03.839939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.666 [2024-07-12 08:52:03.840102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:28.666 BaseBdev2 00:25:28.666 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:28.666 08:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:29.236 BaseBdev3_malloc 00:25:29.236 08:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:29.236 true 00:25:29.236 08:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:29.495 [2024-07-12 08:52:04.535422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:29.495 [2024-07-12 08:52:04.535682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.495 [2024-07-12 08:52:04.535821] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:29.495 [2024-07-12 08:52:04.535937] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.495 [2024-07-12 08:52:04.538024] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.495 [2024-07-12 08:52:04.538197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:29.495 BaseBdev3 00:25:29.495 08:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:29.495 08:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:29.753 BaseBdev4_malloc 00:25:29.753 08:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:30.010 true 00:25:30.010 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:30.267 [2024-07-12 08:52:05.211473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:30.267 [2024-07-12 08:52:05.211742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:30.267 [2024-07-12 08:52:05.211810] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:30.267 [2024-07-12 08:52:05.212056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:30.267 [2024-07-12 08:52:05.214200] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:30.267 [2024-07-12 08:52:05.214390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:30.267 BaseBdev4 00:25:30.267 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:30.267 [2024-07-12 08:52:05.431573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:30.267 [2024-07-12 08:52:05.433511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.267 [2024-07-12 08:52:05.433765] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:30.267 [2024-07-12 08:52:05.434030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:30.267 [2024-07-12 08:52:05.434453] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:30.267 [2024-07-12 08:52:05.434573] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:30.267 [2024-07-12 08:52:05.434743] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:30.267 [2024-07-12 08:52:05.435171] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:30.267 [2024-07-12 08:52:05.435292] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:30.268 [2024-07-12 08:52:05.435587] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.268 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.524 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.524 "name": "raid_bdev1", 00:25:30.524 "uuid": "686395e8-43dd-4ff5-be51-abcc6e07ddb5", 00:25:30.524 "strip_size_kb": 64, 00:25:30.524 "state": "online", 00:25:30.524 "raid_level": "raid0", 00:25:30.524 "superblock": true, 00:25:30.524 "num_base_bdevs": 4, 00:25:30.524 "num_base_bdevs_discovered": 4, 00:25:30.524 
"num_base_bdevs_operational": 4, 00:25:30.524 "base_bdevs_list": [ 00:25:30.524 { 00:25:30.524 "name": "BaseBdev1", 00:25:30.524 "uuid": "0c29edad-5e0c-5f7d-afa8-7b68f137a59a", 00:25:30.524 "is_configured": true, 00:25:30.524 "data_offset": 2048, 00:25:30.524 "data_size": 63488 00:25:30.524 }, 00:25:30.524 { 00:25:30.524 "name": "BaseBdev2", 00:25:30.524 "uuid": "6e43fd22-8cde-54e5-8326-7f50153d8d37", 00:25:30.524 "is_configured": true, 00:25:30.524 "data_offset": 2048, 00:25:30.524 "data_size": 63488 00:25:30.524 }, 00:25:30.524 { 00:25:30.524 "name": "BaseBdev3", 00:25:30.524 "uuid": "24aea784-cd2e-55b0-a2cf-e5e58833416f", 00:25:30.524 "is_configured": true, 00:25:30.524 "data_offset": 2048, 00:25:30.524 "data_size": 63488 00:25:30.524 }, 00:25:30.524 { 00:25:30.524 "name": "BaseBdev4", 00:25:30.524 "uuid": "c0283b8c-2b50-5983-b66d-1097d2f481e1", 00:25:30.524 "is_configured": true, 00:25:30.524 "data_offset": 2048, 00:25:30.524 "data_size": 63488 00:25:30.524 } 00:25:30.524 ] 00:25:30.524 }' 00:25:30.524 08:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.524 08:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.456 08:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:31.456 08:52:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:31.456 [2024-07-12 08:52:06.461276] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:32.390 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.648 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:32.906 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.906 "name": "raid_bdev1", 00:25:32.906 "uuid": "686395e8-43dd-4ff5-be51-abcc6e07ddb5", 00:25:32.906 "strip_size_kb": 64, 00:25:32.906 "state": "online", 00:25:32.906 "raid_level": "raid0", 00:25:32.906 "superblock": true, 00:25:32.906 "num_base_bdevs": 4, 00:25:32.906 "num_base_bdevs_discovered": 4, 00:25:32.906 "num_base_bdevs_operational": 4, 00:25:32.906 "base_bdevs_list": [ 00:25:32.906 { 00:25:32.906 "name": "BaseBdev1", 00:25:32.906 "uuid": "0c29edad-5e0c-5f7d-afa8-7b68f137a59a", 00:25:32.906 "is_configured": true, 00:25:32.906 "data_offset": 2048, 00:25:32.906 "data_size": 63488 00:25:32.906 }, 00:25:32.906 { 00:25:32.906 "name": "BaseBdev2", 00:25:32.906 "uuid": "6e43fd22-8cde-54e5-8326-7f50153d8d37", 00:25:32.906 "is_configured": true, 00:25:32.906 "data_offset": 2048, 00:25:32.906 "data_size": 63488 00:25:32.906 }, 00:25:32.906 { 00:25:32.906 "name": "BaseBdev3", 00:25:32.906 "uuid": "24aea784-cd2e-55b0-a2cf-e5e58833416f", 00:25:32.906 "is_configured": true, 00:25:32.906 "data_offset": 2048, 00:25:32.906 "data_size": 63488 00:25:32.906 }, 00:25:32.906 { 00:25:32.906 "name": "BaseBdev4", 00:25:32.906 "uuid": "c0283b8c-2b50-5983-b66d-1097d2f481e1", 00:25:32.906 "is_configured": true, 00:25:32.906 "data_offset": 2048, 00:25:32.906 "data_size": 63488 00:25:32.906 } 00:25:32.906 ] 00:25:32.906 }' 00:25:32.906 08:52:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.906 08:52:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.472 08:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:33.730 [2024-07-12 08:52:08.843494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:33.730 [2024-07-12 08:52:08.843816] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.730 [2024-07-12 08:52:08.846748] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.730 [2024-07-12 08:52:08.846947] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.730 [2024-07-12 08:52:08.847030] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.730 [2024-07-12 08:52:08.847252] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:33.730 0 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138540 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 138540 ']' 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 138540 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138540 00:25:33.730 killing process with pid 138540 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138540' 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 138540 00:25:33.730 08:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 138540 00:25:33.730 [2024-07-12 08:52:08.878282] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.988 [2024-07-12 08:52:09.113104] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8ZfHOLgcQQ 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:35.358 ************************************ 00:25:35.358 END TEST raid_read_error_test 00:25:35.358 ************************************ 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:25:35.358 00:25:35.358 real 0m8.687s 00:25:35.358 user 0m13.656s 00:25:35.358 sys 0m0.895s 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:35.358 08:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.358 08:52:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:35.358 08:52:10 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:25:35.358 08:52:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:35.358 08:52:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.358 08:52:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:35.358 ************************************ 00:25:35.358 START TEST raid_write_error_test 00:25:35.358 ************************************ 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:35.358 08:52:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jOLShlDoBf 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138776 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138776 /var/tmp/spdk-raid.sock 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 138776 ']' 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.358 08:52:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.358 [2024-07-12 08:52:10.312523] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:25:35.358 [2024-07-12 08:52:10.313010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138776 ] 00:25:35.358 [2024-07-12 08:52:10.479685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.615 [2024-07-12 08:52:10.653005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.871 [2024-07-12 08:52:10.834031] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.128 08:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.128 08:52:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:36.128 08:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:36.128 08:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:36.385 BaseBdev1_malloc 00:25:36.385 08:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:36.642 true 00:25:36.642 08:52:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:36.898 [2024-07-12 08:52:12.013944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:36.898 [2024-07-12 08:52:12.014167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.898 [2024-07-12 08:52:12.014240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:36.898 [2024-07-12 08:52:12.014473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.898 [2024-07-12 08:52:12.016777] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.898 [2024-07-12 08:52:12.016938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:36.898 BaseBdev1 00:25:36.898 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:36.898 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:37.155 BaseBdev2_malloc 00:25:37.155 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:37.412 true 00:25:37.412 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:37.669 [2024-07-12 08:52:12.797265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:37.669 [2024-07-12 08:52:12.797525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.669 [2024-07-12 08:52:12.797708] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:37.669 [2024-07-12 
08:52:12.797834] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.669 [2024-07-12 08:52:12.800222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.669 [2024-07-12 08:52:12.800407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:37.669 BaseBdev2 00:25:37.669 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:37.669 08:52:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:37.927 BaseBdev3_malloc 00:25:37.927 08:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:38.184 true 00:25:38.184 08:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:38.441 [2024-07-12 08:52:13.576752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:38.441 [2024-07-12 08:52:13.576995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.441 [2024-07-12 08:52:13.577147] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:38.441 [2024-07-12 08:52:13.577286] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.441 [2024-07-12 08:52:13.579915] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.441 [2024-07-12 08:52:13.580012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:38.441 BaseBdev3 00:25:38.441 08:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:38.441 08:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:38.699 BaseBdev4_malloc 00:25:38.699 08:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:38.956 true 00:25:38.956 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:39.213 [2024-07-12 08:52:14.348038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:39.213 [2024-07-12 08:52:14.348293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.213 [2024-07-12 08:52:14.348492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:39.213 [2024-07-12 08:52:14.348663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.213 [2024-07-12 08:52:14.350928] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.213 [2024-07-12 08:52:14.351091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:39.213 BaseBdev4 00:25:39.213 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:39.473 [2024-07-12 08:52:14.600183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:39.473 [2024-07-12 08:52:14.602390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:39.473 [2024-07-12 08:52:14.602627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:39.473 [2024-07-12 08:52:14.602851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:39.473 [2024-07-12 08:52:14.603255] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:39.473 [2024-07-12 08:52:14.603375] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:39.473 [2024-07-12 08:52:14.603560] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:39.473 [2024-07-12 08:52:14.603989] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:39.473 [2024-07-12 08:52:14.604110] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:39.473 [2024-07-12 08:52:14.604435] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.473 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.744 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.744 "name": "raid_bdev1", 00:25:39.744 "uuid": "d95b90eb-f110-4d3f-8376-fb8fa1964411", 00:25:39.744 "strip_size_kb": 64, 00:25:39.744 "state": "online", 00:25:39.744 "raid_level": "raid0", 00:25:39.744 "superblock": true, 00:25:39.744 "num_base_bdevs": 4, 00:25:39.744 "num_base_bdevs_discovered": 4, 00:25:39.744 "num_base_bdevs_operational": 4, 00:25:39.744 "base_bdevs_list": [ 00:25:39.744 { 00:25:39.744 "name": "BaseBdev1", 00:25:39.744 "uuid": "e547b22e-1be3-550f-9d24-cf63c18780da", 00:25:39.744 "is_configured": true, 00:25:39.744 "data_offset": 2048, 00:25:39.744 "data_size": 63488 00:25:39.744 }, 00:25:39.744 { 
00:25:39.744 "name": "BaseBdev2", 00:25:39.744 "uuid": "dd2260b4-61a5-530e-829f-c6b3a8642900", 00:25:39.744 "is_configured": true, 00:25:39.744 "data_offset": 2048, 00:25:39.744 "data_size": 63488 00:25:39.744 }, 00:25:39.744 { 00:25:39.744 "name": "BaseBdev3", 00:25:39.744 "uuid": "96934881-d934-55b3-944d-148b170ea077", 00:25:39.744 "is_configured": true, 00:25:39.744 "data_offset": 2048, 00:25:39.744 "data_size": 63488 00:25:39.744 }, 00:25:39.744 { 00:25:39.744 "name": "BaseBdev4", 00:25:39.744 "uuid": "8c88c81c-47b8-5e6f-b963-8fe72386dfda", 00:25:39.744 "is_configured": true, 00:25:39.744 "data_offset": 2048, 00:25:39.744 "data_size": 63488 00:25:39.744 } 00:25:39.744 ] 00:25:39.744 }' 00:25:39.744 08:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.745 08:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.326 08:52:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:40.326 08:52:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:40.583 [2024-07-12 08:52:15.562069] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:41.517 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.774 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.775 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.033 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.033 "name": "raid_bdev1", 00:25:42.033 "uuid": "d95b90eb-f110-4d3f-8376-fb8fa1964411", 00:25:42.033 "strip_size_kb": 64, 00:25:42.033 "state": "online", 00:25:42.033 
"raid_level": "raid0", 00:25:42.033 "superblock": true, 00:25:42.033 "num_base_bdevs": 4, 00:25:42.033 "num_base_bdevs_discovered": 4, 00:25:42.033 "num_base_bdevs_operational": 4, 00:25:42.033 "base_bdevs_list": [ 00:25:42.033 { 00:25:42.033 "name": "BaseBdev1", 00:25:42.033 "uuid": "e547b22e-1be3-550f-9d24-cf63c18780da", 00:25:42.033 "is_configured": true, 00:25:42.033 "data_offset": 2048, 00:25:42.033 "data_size": 63488 00:25:42.033 }, 00:25:42.033 { 00:25:42.033 "name": "BaseBdev2", 00:25:42.033 "uuid": "dd2260b4-61a5-530e-829f-c6b3a8642900", 00:25:42.033 "is_configured": true, 00:25:42.033 "data_offset": 2048, 00:25:42.033 "data_size": 63488 00:25:42.033 }, 00:25:42.033 { 00:25:42.033 "name": "BaseBdev3", 00:25:42.033 "uuid": "96934881-d934-55b3-944d-148b170ea077", 00:25:42.033 "is_configured": true, 00:25:42.033 "data_offset": 2048, 00:25:42.033 "data_size": 63488 00:25:42.033 }, 00:25:42.033 { 00:25:42.033 "name": "BaseBdev4", 00:25:42.033 "uuid": "8c88c81c-47b8-5e6f-b963-8fe72386dfda", 00:25:42.033 "is_configured": true, 00:25:42.033 "data_offset": 2048, 00:25:42.033 "data_size": 63488 00:25:42.033 } 00:25:42.033 ] 00:25:42.033 }' 00:25:42.033 08:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.033 08:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.599 08:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:42.859 [2024-07-12 08:52:17.901829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.859 [2024-07-12 08:52:17.902027] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.859 [2024-07-12 08:52:17.904753] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.859 [2024-07-12 08:52:17.904969] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.859 [2024-07-12 08:52:17.905047] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.859 [2024-07-12 08:52:17.905187] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:42.859 0 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138776 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 138776 ']' 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 138776 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138776 00:25:42.859 killing process with pid 138776 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138776' 00:25:42.859 08:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 138776 00:25:42.859 08:52:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 138776 00:25:42.859 [2024-07-12 08:52:17.933178] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.118 [2024-07-12 08:52:18.144933] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jOLShlDoBf 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:44.494 ************************************ 00:25:44.494 END TEST raid_write_error_test 00:25:44.494 ************************************ 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:25:44.494 00:25:44.494 real 0m9.073s 00:25:44.494 user 0m14.276s 00:25:44.494 sys 0m0.932s 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.494 08:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.494 08:52:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:44.494 08:52:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:44.494 08:52:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:25:44.494 08:52:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:44.494 08:52:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.494 08:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:44.494 ************************************ 00:25:44.494 START TEST raid_state_function_test 00:25:44.494 ************************************ 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:44.494 08:52:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=139006 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:44.494 Process raid pid: 139006 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139006' 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 139006 /var/tmp/spdk-raid.sock 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 139006 ']' 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:44.494 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.495 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:44.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
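[Editor's note] Unlike the bdevperf-driven test above, raid_state_function_test starts a bare bdev_svc target on the same RPC socket (its -L bdev_raid flag is what produces the *DEBUG* bdev_raid lines throughout this log) and exercises the raid bdev state machine. The array is created before any of its base bdevs exist, so it sits in the "configuring" state, and it transitions to "online" by itself once all four base bdevs have been created and claimed. A minimal sketch of that life cycle, assuming the socket path from this log:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
# create the array first; each base bdev "doesn't exist now", state = configuring
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# register base bdevs one at a time; each is claimed as soon as it appears
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
done
# after the fourth create, the raid flips to online without further RPCs
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'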
00:25:44.495 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.495 08:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.495 [2024-07-12 08:52:19.432116] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:25:44.495 [2024-07-12 08:52:19.432548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.495 [2024-07-12 08:52:19.598713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.753 [2024-07-12 08:52:19.770627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.012 [2024-07-12 08:52:19.952966] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.271 08:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.271 08:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:45.271 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:45.529 [2024-07-12 08:52:20.582182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:45.529 [2024-07-12 08:52:20.582492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:45.529 [2024-07-12 08:52:20.582598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:45.529 [2024-07-12 08:52:20.582656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:45.529 [2024-07-12 08:52:20.582760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:45.529 [2024-07-12 08:52:20.582809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:45.529 [2024-07-12 08:52:20.582836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:45.529 [2024-07-12 08:52:20.582941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.529 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.529 08:52:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.530 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.530 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.788 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.788 "name": "Existed_Raid", 00:25:45.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.788 "strip_size_kb": 64, 00:25:45.788 "state": "configuring", 00:25:45.788 "raid_level": "concat", 00:25:45.788 "superblock": false, 00:25:45.788 "num_base_bdevs": 4, 00:25:45.788 "num_base_bdevs_discovered": 0, 00:25:45.788 "num_base_bdevs_operational": 4, 00:25:45.788 "base_bdevs_list": [ 00:25:45.788 { 00:25:45.788 "name": "BaseBdev1", 00:25:45.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.788 "is_configured": false, 00:25:45.788 "data_offset": 0, 00:25:45.788 "data_size": 0 00:25:45.788 }, 00:25:45.788 { 00:25:45.788 "name": "BaseBdev2", 00:25:45.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.788 "is_configured": false, 00:25:45.788 "data_offset": 0, 00:25:45.788 "data_size": 0 00:25:45.788 }, 00:25:45.788 { 00:25:45.788 "name": "BaseBdev3", 00:25:45.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.788 "is_configured": false, 00:25:45.788 "data_offset": 0, 00:25:45.788 "data_size": 0 00:25:45.788 }, 00:25:45.788 { 00:25:45.788 "name": "BaseBdev4", 00:25:45.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.788 "is_configured": false, 00:25:45.788 "data_offset": 0, 00:25:45.788 "data_size": 0 00:25:45.788 } 00:25:45.788 ] 00:25:45.788 }' 00:25:45.788 08:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.788 08:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.377 08:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:46.636 [2024-07-12 08:52:21.662373] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:46.636 [2024-07-12 08:52:21.662605] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:46.636 08:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:46.894 [2024-07-12 08:52:21.934528] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:46.894 [2024-07-12 08:52:21.934782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:46.894 [2024-07-12 08:52:21.934905] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:46.894 [2024-07-12 08:52:21.934991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:46.894 [2024-07-12 08:52:21.935161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:46.894 [2024-07-12 08:52:21.935234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:46.894 [2024-07-12 
08:52:21.935388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:46.894 [2024-07-12 08:52:21.935445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:46.894 08:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:47.153 [2024-07-12 08:52:22.221159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:47.153 BaseBdev1 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:47.153 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:47.412 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:47.671 [ 00:25:47.671 { 00:25:47.671 "name": "BaseBdev1", 00:25:47.671 "aliases": [ 00:25:47.671 "c198399e-7d2c-4544-b8c7-fd013409bc56" 00:25:47.671 ], 00:25:47.671 "product_name": "Malloc disk", 00:25:47.671 "block_size": 512, 00:25:47.671 "num_blocks": 65536, 00:25:47.671 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:47.671 "assigned_rate_limits": { 00:25:47.671 "rw_ios_per_sec": 0, 00:25:47.671 "rw_mbytes_per_sec": 0, 00:25:47.671 "r_mbytes_per_sec": 0, 00:25:47.671 "w_mbytes_per_sec": 0 00:25:47.671 }, 00:25:47.671 "claimed": true, 00:25:47.671 "claim_type": "exclusive_write", 00:25:47.671 "zoned": false, 00:25:47.671 "supported_io_types": { 00:25:47.671 "read": true, 00:25:47.671 "write": true, 00:25:47.671 "unmap": true, 00:25:47.671 "flush": true, 00:25:47.671 "reset": true, 00:25:47.671 "nvme_admin": false, 00:25:47.671 "nvme_io": false, 00:25:47.671 "nvme_io_md": false, 00:25:47.671 "write_zeroes": true, 00:25:47.671 "zcopy": true, 00:25:47.671 "get_zone_info": false, 00:25:47.671 "zone_management": false, 00:25:47.671 "zone_append": false, 00:25:47.671 "compare": false, 00:25:47.671 "compare_and_write": false, 00:25:47.671 "abort": true, 00:25:47.671 "seek_hole": false, 00:25:47.671 "seek_data": false, 00:25:47.671 "copy": true, 00:25:47.671 "nvme_iov_md": false 00:25:47.671 }, 00:25:47.671 "memory_domains": [ 00:25:47.671 { 00:25:47.671 "dma_device_id": "system", 00:25:47.671 "dma_device_type": 1 00:25:47.671 }, 00:25:47.671 { 00:25:47.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.671 "dma_device_type": 2 00:25:47.671 } 00:25:47.671 ], 00:25:47.671 "driver_specific": {} 00:25:47.671 } 00:25:47.671 ] 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
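[Editor's note] The verify_raid_bdev_state helper invoked here (and repeatedly throughout this test) pulls the raid bdev's JSON out of bdev_raid_get_bdevs and checks it field by field against the expected values; the comparisons run under xtrace_disable, so only the jq extraction is visible in the trace. A rough equivalent, with the suppressed checks written out as an assumption about what the hidden section does:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# hypothetical reconstruction of the checks hidden by xtrace_disable:
[[ $(jq -r .state <<< "$info") == configuring ]]
[[ $(jq -r .raid_level <<< "$info") == concat ]]
[[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]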
00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:47.671 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.672 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.672 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.672 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.672 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.672 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.930 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.930 "name": "Existed_Raid", 00:25:47.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.930 "strip_size_kb": 64, 00:25:47.930 "state": "configuring", 00:25:47.930 "raid_level": "concat", 00:25:47.930 "superblock": false, 00:25:47.930 "num_base_bdevs": 4, 00:25:47.930 "num_base_bdevs_discovered": 1, 00:25:47.930 "num_base_bdevs_operational": 4, 00:25:47.930 "base_bdevs_list": [ 00:25:47.930 { 00:25:47.930 "name": "BaseBdev1", 00:25:47.930 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:47.930 "is_configured": true, 00:25:47.930 "data_offset": 0, 00:25:47.930 "data_size": 65536 00:25:47.930 }, 00:25:47.930 { 00:25:47.930 "name": "BaseBdev2", 00:25:47.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.930 "is_configured": false, 00:25:47.930 "data_offset": 0, 00:25:47.930 "data_size": 0 00:25:47.930 }, 00:25:47.930 { 00:25:47.930 "name": "BaseBdev3", 00:25:47.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.930 "is_configured": false, 00:25:47.930 "data_offset": 0, 00:25:47.930 "data_size": 0 00:25:47.930 }, 00:25:47.931 { 00:25:47.931 "name": "BaseBdev4", 00:25:47.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.931 "is_configured": false, 00:25:47.931 "data_offset": 0, 00:25:47.931 "data_size": 0 00:25:47.931 } 00:25:47.931 ] 00:25:47.931 }' 00:25:47.931 08:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.931 08:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.497 08:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:48.755 [2024-07-12 08:52:23.817583] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:48.755 [2024-07-12 08:52:23.817842] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:25:48.755 08:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:49.014 [2024-07-12 08:52:24.021633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:49.014 [2024-07-12 08:52:24.023420] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:49.014 [2024-07-12 08:52:24.023605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:49.014 [2024-07-12 08:52:24.023718] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:49.014 [2024-07-12 08:52:24.023833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:49.014 [2024-07-12 08:52:24.023921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:49.014 [2024-07-12 08:52:24.024041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:49.014 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:49.015 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.015 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.273 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.273 "name": "Existed_Raid", 00:25:49.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.273 "strip_size_kb": 64, 00:25:49.273 "state": "configuring", 00:25:49.273 "raid_level": "concat", 00:25:49.273 "superblock": false, 00:25:49.273 "num_base_bdevs": 4, 00:25:49.273 "num_base_bdevs_discovered": 1, 00:25:49.273 "num_base_bdevs_operational": 4, 00:25:49.273 "base_bdevs_list": [ 00:25:49.273 { 00:25:49.273 "name": "BaseBdev1", 00:25:49.273 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:49.273 "is_configured": true, 00:25:49.273 "data_offset": 0, 00:25:49.273 "data_size": 65536 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "name": "BaseBdev2", 00:25:49.273 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:49.273 "is_configured": false, 00:25:49.273 "data_offset": 0, 00:25:49.273 "data_size": 0 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "name": "BaseBdev3", 00:25:49.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.273 "is_configured": false, 00:25:49.273 "data_offset": 0, 00:25:49.273 "data_size": 0 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "name": "BaseBdev4", 00:25:49.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.273 "is_configured": false, 00:25:49.273 "data_offset": 0, 00:25:49.273 "data_size": 0 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 }' 00:25:49.273 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.273 08:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.840 08:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:50.099 [2024-07-12 08:52:25.194549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:50.099 BaseBdev2 00:25:50.099 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:50.099 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:50.099 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:50.100 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:50.100 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:50.100 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:50.100 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.358 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:50.617 [ 00:25:50.617 { 00:25:50.617 "name": "BaseBdev2", 00:25:50.617 "aliases": [ 00:25:50.617 "fcf5e6c9-2aa1-4557-a233-21769c95d22f" 00:25:50.617 ], 00:25:50.617 "product_name": "Malloc disk", 00:25:50.617 "block_size": 512, 00:25:50.617 "num_blocks": 65536, 00:25:50.617 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:50.617 "assigned_rate_limits": { 00:25:50.617 "rw_ios_per_sec": 0, 00:25:50.617 "rw_mbytes_per_sec": 0, 00:25:50.618 "r_mbytes_per_sec": 0, 00:25:50.618 "w_mbytes_per_sec": 0 00:25:50.618 }, 00:25:50.618 "claimed": true, 00:25:50.618 "claim_type": "exclusive_write", 00:25:50.618 "zoned": false, 00:25:50.618 "supported_io_types": { 00:25:50.618 "read": true, 00:25:50.618 "write": true, 00:25:50.618 "unmap": true, 00:25:50.618 "flush": true, 00:25:50.618 "reset": true, 00:25:50.618 "nvme_admin": false, 00:25:50.618 "nvme_io": false, 00:25:50.618 "nvme_io_md": false, 00:25:50.618 "write_zeroes": true, 00:25:50.618 "zcopy": true, 00:25:50.618 "get_zone_info": false, 00:25:50.618 "zone_management": false, 00:25:50.618 "zone_append": false, 00:25:50.618 "compare": false, 00:25:50.618 "compare_and_write": false, 00:25:50.618 "abort": true, 00:25:50.618 "seek_hole": false, 00:25:50.618 "seek_data": false, 00:25:50.618 "copy": true, 00:25:50.618 "nvme_iov_md": false 00:25:50.618 }, 00:25:50.618 "memory_domains": [ 
00:25:50.618 { 00:25:50.618 "dma_device_id": "system", 00:25:50.618 "dma_device_type": 1 00:25:50.618 }, 00:25:50.618 { 00:25:50.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.618 "dma_device_type": 2 00:25:50.618 } 00:25:50.618 ], 00:25:50.618 "driver_specific": {} 00:25:50.618 } 00:25:50.618 ] 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.618 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.891 08:52:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.891 "name": "Existed_Raid", 00:25:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.891 "strip_size_kb": 64, 00:25:50.891 "state": "configuring", 00:25:50.891 "raid_level": "concat", 00:25:50.891 "superblock": false, 00:25:50.891 "num_base_bdevs": 4, 00:25:50.891 "num_base_bdevs_discovered": 2, 00:25:50.891 "num_base_bdevs_operational": 4, 00:25:50.891 "base_bdevs_list": [ 00:25:50.891 { 00:25:50.891 "name": "BaseBdev1", 00:25:50.891 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:50.891 "is_configured": true, 00:25:50.891 "data_offset": 0, 00:25:50.891 "data_size": 65536 00:25:50.891 }, 00:25:50.891 { 00:25:50.891 "name": "BaseBdev2", 00:25:50.891 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:50.891 "is_configured": true, 00:25:50.891 "data_offset": 0, 00:25:50.891 "data_size": 65536 00:25:50.891 }, 00:25:50.891 { 00:25:50.891 "name": "BaseBdev3", 00:25:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.891 "is_configured": false, 00:25:50.891 "data_offset": 0, 00:25:50.891 "data_size": 0 00:25:50.891 }, 00:25:50.891 { 00:25:50.891 "name": "BaseBdev4", 00:25:50.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.891 "is_configured": false, 00:25:50.891 "data_offset": 0, 00:25:50.891 "data_size": 0 00:25:50.891 } 00:25:50.891 ] 00:25:50.891 }' 00:25:50.892 08:52:25 
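[Editor's note] Each bdev_malloc_create in this test is followed by the waitforbdev helper, which first drains pending examine callbacks and then queries the target for the bdev by name with a timeout, so the state assertions never race bdev registration. The visible RPC pair, condensed (same socket as above; 2000 is the helper's bdev_timeout default from the trace):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_wait_for_examine                 # block until examine of newly added bdevs completes
$rpc bdev_get_bdevs -b BaseBdev2 -t 2000   # fails if the bdev has not appeared within the timeout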
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.892 08:52:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.482 08:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:51.740 [2024-07-12 08:52:26.786930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:51.740 BaseBdev3 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:51.740 08:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:51.998 08:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:52.257 [ 00:25:52.257 { 00:25:52.257 "name": "BaseBdev3", 00:25:52.257 "aliases": [ 00:25:52.257 "d4f7e1e4-0661-4afb-9875-084cf6d97524" 00:25:52.257 ], 00:25:52.257 "product_name": "Malloc disk", 00:25:52.257 "block_size": 512, 00:25:52.257 "num_blocks": 65536, 00:25:52.257 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:52.257 "assigned_rate_limits": { 00:25:52.257 "rw_ios_per_sec": 0, 00:25:52.257 "rw_mbytes_per_sec": 0, 00:25:52.257 "r_mbytes_per_sec": 0, 00:25:52.257 "w_mbytes_per_sec": 0 00:25:52.257 }, 00:25:52.257 "claimed": true, 00:25:52.257 "claim_type": "exclusive_write", 00:25:52.257 "zoned": false, 00:25:52.257 "supported_io_types": { 00:25:52.257 "read": true, 00:25:52.257 "write": true, 00:25:52.257 "unmap": true, 00:25:52.257 "flush": true, 00:25:52.257 "reset": true, 00:25:52.257 "nvme_admin": false, 00:25:52.257 "nvme_io": false, 00:25:52.257 "nvme_io_md": false, 00:25:52.257 "write_zeroes": true, 00:25:52.257 "zcopy": true, 00:25:52.257 "get_zone_info": false, 00:25:52.257 "zone_management": false, 00:25:52.257 "zone_append": false, 00:25:52.257 "compare": false, 00:25:52.257 "compare_and_write": false, 00:25:52.257 "abort": true, 00:25:52.257 "seek_hole": false, 00:25:52.257 "seek_data": false, 00:25:52.257 "copy": true, 00:25:52.257 "nvme_iov_md": false 00:25:52.257 }, 00:25:52.257 "memory_domains": [ 00:25:52.257 { 00:25:52.257 "dma_device_id": "system", 00:25:52.257 "dma_device_type": 1 00:25:52.257 }, 00:25:52.257 { 00:25:52.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.257 "dma_device_type": 2 00:25:52.257 } 00:25:52.257 ], 00:25:52.257 "driver_specific": {} 00:25:52.257 } 00:25:52.257 ] 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs 
)) 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.257 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.516 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.516 "name": "Existed_Raid", 00:25:52.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.516 "strip_size_kb": 64, 00:25:52.516 "state": "configuring", 00:25:52.516 "raid_level": "concat", 00:25:52.516 "superblock": false, 00:25:52.516 "num_base_bdevs": 4, 00:25:52.516 "num_base_bdevs_discovered": 3, 00:25:52.516 "num_base_bdevs_operational": 4, 00:25:52.516 "base_bdevs_list": [ 00:25:52.516 { 00:25:52.516 "name": "BaseBdev1", 00:25:52.516 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:52.516 "is_configured": true, 00:25:52.516 "data_offset": 0, 00:25:52.516 "data_size": 65536 00:25:52.516 }, 00:25:52.516 { 00:25:52.516 "name": "BaseBdev2", 00:25:52.516 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:52.516 "is_configured": true, 00:25:52.516 "data_offset": 0, 00:25:52.516 "data_size": 65536 00:25:52.516 }, 00:25:52.516 { 00:25:52.516 "name": "BaseBdev3", 00:25:52.516 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:52.516 "is_configured": true, 00:25:52.516 "data_offset": 0, 00:25:52.516 "data_size": 65536 00:25:52.516 }, 00:25:52.516 { 00:25:52.516 "name": "BaseBdev4", 00:25:52.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.516 "is_configured": false, 00:25:52.516 "data_offset": 0, 00:25:52.516 "data_size": 0 00:25:52.516 } 00:25:52.516 ] 00:25:52.516 }' 00:25:52.516 08:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.516 08:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.083 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:53.340 [2024-07-12 08:52:28.379985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:53.340 [2024-07-12 08:52:28.380233] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007580 00:25:53.340 [2024-07-12 08:52:28.380271] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:53.340 [2024-07-12 08:52:28.380487] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:53.340 [2024-07-12 08:52:28.381003] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:25:53.340 [2024-07-12 08:52:28.381155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:25:53.340 [2024-07-12 08:52:28.381560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.340 BaseBdev4 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:53.340 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:53.341 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:53.597 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:53.597 [ 00:25:53.597 { 00:25:53.597 "name": "BaseBdev4", 00:25:53.597 "aliases": [ 00:25:53.597 "f127a4b6-feae-4d26-aa90-43d2459c934a" 00:25:53.597 ], 00:25:53.597 "product_name": "Malloc disk", 00:25:53.597 "block_size": 512, 00:25:53.597 "num_blocks": 65536, 00:25:53.597 "uuid": "f127a4b6-feae-4d26-aa90-43d2459c934a", 00:25:53.597 "assigned_rate_limits": { 00:25:53.597 "rw_ios_per_sec": 0, 00:25:53.597 "rw_mbytes_per_sec": 0, 00:25:53.597 "r_mbytes_per_sec": 0, 00:25:53.597 "w_mbytes_per_sec": 0 00:25:53.597 }, 00:25:53.597 "claimed": true, 00:25:53.597 "claim_type": "exclusive_write", 00:25:53.597 "zoned": false, 00:25:53.597 "supported_io_types": { 00:25:53.597 "read": true, 00:25:53.597 "write": true, 00:25:53.597 "unmap": true, 00:25:53.597 "flush": true, 00:25:53.597 "reset": true, 00:25:53.597 "nvme_admin": false, 00:25:53.597 "nvme_io": false, 00:25:53.597 "nvme_io_md": false, 00:25:53.597 "write_zeroes": true, 00:25:53.597 "zcopy": true, 00:25:53.597 "get_zone_info": false, 00:25:53.597 "zone_management": false, 00:25:53.597 "zone_append": false, 00:25:53.597 "compare": false, 00:25:53.597 "compare_and_write": false, 00:25:53.597 "abort": true, 00:25:53.597 "seek_hole": false, 00:25:53.597 "seek_data": false, 00:25:53.597 "copy": true, 00:25:53.597 "nvme_iov_md": false 00:25:53.597 }, 00:25:53.597 "memory_domains": [ 00:25:53.597 { 00:25:53.597 "dma_device_id": "system", 00:25:53.597 "dma_device_type": 1 00:25:53.597 }, 00:25:53.597 { 00:25:53.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.597 "dma_device_type": 2 00:25:53.597 } 00:25:53.597 ], 00:25:53.597 "driver_specific": {} 00:25:53.597 } 00:25:53.597 ] 00:25:53.597 08:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:53.597 08:52:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:53.597 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:53.597 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.598 08:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.854 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.854 "name": "Existed_Raid", 00:25:53.854 "uuid": "730c029a-1909-402e-8d19-3896cbfeefd2", 00:25:53.854 "strip_size_kb": 64, 00:25:53.854 "state": "online", 00:25:53.854 "raid_level": "concat", 00:25:53.854 "superblock": false, 00:25:53.854 "num_base_bdevs": 4, 00:25:53.854 "num_base_bdevs_discovered": 4, 00:25:53.854 "num_base_bdevs_operational": 4, 00:25:53.854 "base_bdevs_list": [ 00:25:53.854 { 00:25:53.854 "name": "BaseBdev1", 00:25:53.854 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:53.854 "is_configured": true, 00:25:53.854 "data_offset": 0, 00:25:53.854 "data_size": 65536 00:25:53.854 }, 00:25:53.854 { 00:25:53.854 "name": "BaseBdev2", 00:25:53.854 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:53.854 "is_configured": true, 00:25:53.854 "data_offset": 0, 00:25:53.854 "data_size": 65536 00:25:53.854 }, 00:25:53.854 { 00:25:53.854 "name": "BaseBdev3", 00:25:53.854 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:53.854 "is_configured": true, 00:25:53.854 "data_offset": 0, 00:25:53.854 "data_size": 65536 00:25:53.854 }, 00:25:53.854 { 00:25:53.854 "name": "BaseBdev4", 00:25:53.854 "uuid": "f127a4b6-feae-4d26-aa90-43d2459c934a", 00:25:53.854 "is_configured": true, 00:25:53.854 "data_offset": 0, 00:25:53.854 "data_size": 65536 00:25:53.854 } 00:25:53.854 ] 00:25:53.854 }' 00:25:53.854 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.854 08:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:54.788 08:52:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:54.788 08:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:55.047 [2024-07-12 08:52:30.021183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:55.047 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:55.047 "name": "Existed_Raid", 00:25:55.047 "aliases": [ 00:25:55.047 "730c029a-1909-402e-8d19-3896cbfeefd2" 00:25:55.047 ], 00:25:55.047 "product_name": "Raid Volume", 00:25:55.047 "block_size": 512, 00:25:55.047 "num_blocks": 262144, 00:25:55.047 "uuid": "730c029a-1909-402e-8d19-3896cbfeefd2", 00:25:55.047 "assigned_rate_limits": { 00:25:55.047 "rw_ios_per_sec": 0, 00:25:55.047 "rw_mbytes_per_sec": 0, 00:25:55.047 "r_mbytes_per_sec": 0, 00:25:55.047 "w_mbytes_per_sec": 0 00:25:55.047 }, 00:25:55.047 "claimed": false, 00:25:55.047 "zoned": false, 00:25:55.047 "supported_io_types": { 00:25:55.047 "read": true, 00:25:55.047 "write": true, 00:25:55.047 "unmap": true, 00:25:55.047 "flush": true, 00:25:55.047 "reset": true, 00:25:55.047 "nvme_admin": false, 00:25:55.047 "nvme_io": false, 00:25:55.047 "nvme_io_md": false, 00:25:55.047 "write_zeroes": true, 00:25:55.047 "zcopy": false, 00:25:55.047 "get_zone_info": false, 00:25:55.047 "zone_management": false, 00:25:55.047 "zone_append": false, 00:25:55.047 "compare": false, 00:25:55.047 "compare_and_write": false, 00:25:55.047 "abort": false, 00:25:55.047 "seek_hole": false, 00:25:55.047 "seek_data": false, 00:25:55.047 "copy": false, 00:25:55.047 "nvme_iov_md": false 00:25:55.047 }, 00:25:55.047 "memory_domains": [ 00:25:55.047 { 00:25:55.047 "dma_device_id": "system", 00:25:55.047 "dma_device_type": 1 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.047 "dma_device_type": 2 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "system", 00:25:55.047 "dma_device_type": 1 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.047 "dma_device_type": 2 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "system", 00:25:55.047 "dma_device_type": 1 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.047 "dma_device_type": 2 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "system", 00:25:55.047 "dma_device_type": 1 00:25:55.047 }, 00:25:55.047 { 00:25:55.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.047 "dma_device_type": 2 00:25:55.047 } 00:25:55.047 ], 00:25:55.047 "driver_specific": { 00:25:55.047 "raid": { 00:25:55.047 "uuid": "730c029a-1909-402e-8d19-3896cbfeefd2", 00:25:55.047 "strip_size_kb": 64, 00:25:55.047 "state": "online", 00:25:55.047 "raid_level": "concat", 00:25:55.047 "superblock": false, 00:25:55.048 "num_base_bdevs": 4, 00:25:55.048 "num_base_bdevs_discovered": 4, 00:25:55.048 "num_base_bdevs_operational": 4, 00:25:55.048 "base_bdevs_list": [ 00:25:55.048 { 
00:25:55.048 "name": "BaseBdev1", 00:25:55.048 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:55.048 "is_configured": true, 00:25:55.048 "data_offset": 0, 00:25:55.048 "data_size": 65536 00:25:55.048 }, 00:25:55.048 { 00:25:55.048 "name": "BaseBdev2", 00:25:55.048 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:55.048 "is_configured": true, 00:25:55.048 "data_offset": 0, 00:25:55.048 "data_size": 65536 00:25:55.048 }, 00:25:55.048 { 00:25:55.048 "name": "BaseBdev3", 00:25:55.048 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:55.048 "is_configured": true, 00:25:55.048 "data_offset": 0, 00:25:55.048 "data_size": 65536 00:25:55.048 }, 00:25:55.048 { 00:25:55.048 "name": "BaseBdev4", 00:25:55.048 "uuid": "f127a4b6-feae-4d26-aa90-43d2459c934a", 00:25:55.048 "is_configured": true, 00:25:55.048 "data_offset": 0, 00:25:55.048 "data_size": 65536 00:25:55.048 } 00:25:55.048 ] 00:25:55.048 } 00:25:55.048 } 00:25:55.048 }' 00:25:55.048 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:55.048 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:55.048 BaseBdev2 00:25:55.048 BaseBdev3 00:25:55.048 BaseBdev4' 00:25:55.048 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:55.048 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:55.048 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:55.307 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:55.307 "name": "BaseBdev1", 00:25:55.307 "aliases": [ 00:25:55.307 "c198399e-7d2c-4544-b8c7-fd013409bc56" 00:25:55.307 ], 00:25:55.307 "product_name": "Malloc disk", 00:25:55.307 "block_size": 512, 00:25:55.307 "num_blocks": 65536, 00:25:55.307 "uuid": "c198399e-7d2c-4544-b8c7-fd013409bc56", 00:25:55.307 "assigned_rate_limits": { 00:25:55.307 "rw_ios_per_sec": 0, 00:25:55.307 "rw_mbytes_per_sec": 0, 00:25:55.307 "r_mbytes_per_sec": 0, 00:25:55.307 "w_mbytes_per_sec": 0 00:25:55.307 }, 00:25:55.307 "claimed": true, 00:25:55.307 "claim_type": "exclusive_write", 00:25:55.307 "zoned": false, 00:25:55.307 "supported_io_types": { 00:25:55.307 "read": true, 00:25:55.307 "write": true, 00:25:55.307 "unmap": true, 00:25:55.307 "flush": true, 00:25:55.307 "reset": true, 00:25:55.307 "nvme_admin": false, 00:25:55.307 "nvme_io": false, 00:25:55.307 "nvme_io_md": false, 00:25:55.307 "write_zeroes": true, 00:25:55.307 "zcopy": true, 00:25:55.307 "get_zone_info": false, 00:25:55.307 "zone_management": false, 00:25:55.307 "zone_append": false, 00:25:55.307 "compare": false, 00:25:55.307 "compare_and_write": false, 00:25:55.307 "abort": true, 00:25:55.307 "seek_hole": false, 00:25:55.307 "seek_data": false, 00:25:55.307 "copy": true, 00:25:55.307 "nvme_iov_md": false 00:25:55.307 }, 00:25:55.307 "memory_domains": [ 00:25:55.307 { 00:25:55.307 "dma_device_id": "system", 00:25:55.307 "dma_device_type": 1 00:25:55.307 }, 00:25:55.307 { 00:25:55.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.307 "dma_device_type": 2 00:25:55.307 } 00:25:55.307 ], 00:25:55.307 "driver_specific": {} 00:25:55.307 }' 00:25:55.307 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:55.307 08:52:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:55.307 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:55.307 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:55.565 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:55.823 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:55.823 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:55.823 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:55.823 08:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.082 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.082 "name": "BaseBdev2", 00:25:56.082 "aliases": [ 00:25:56.082 "fcf5e6c9-2aa1-4557-a233-21769c95d22f" 00:25:56.082 ], 00:25:56.082 "product_name": "Malloc disk", 00:25:56.082 "block_size": 512, 00:25:56.082 "num_blocks": 65536, 00:25:56.082 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:56.082 "assigned_rate_limits": { 00:25:56.082 "rw_ios_per_sec": 0, 00:25:56.082 "rw_mbytes_per_sec": 0, 00:25:56.082 "r_mbytes_per_sec": 0, 00:25:56.082 "w_mbytes_per_sec": 0 00:25:56.082 }, 00:25:56.082 "claimed": true, 00:25:56.082 "claim_type": "exclusive_write", 00:25:56.082 "zoned": false, 00:25:56.082 "supported_io_types": { 00:25:56.082 "read": true, 00:25:56.082 "write": true, 00:25:56.082 "unmap": true, 00:25:56.082 "flush": true, 00:25:56.082 "reset": true, 00:25:56.082 "nvme_admin": false, 00:25:56.082 "nvme_io": false, 00:25:56.082 "nvme_io_md": false, 00:25:56.082 "write_zeroes": true, 00:25:56.082 "zcopy": true, 00:25:56.082 "get_zone_info": false, 00:25:56.082 "zone_management": false, 00:25:56.082 "zone_append": false, 00:25:56.082 "compare": false, 00:25:56.082 "compare_and_write": false, 00:25:56.082 "abort": true, 00:25:56.082 "seek_hole": false, 00:25:56.082 "seek_data": false, 00:25:56.082 "copy": true, 00:25:56.082 "nvme_iov_md": false 00:25:56.082 }, 00:25:56.082 "memory_domains": [ 00:25:56.082 { 00:25:56.082 "dma_device_id": "system", 00:25:56.082 "dma_device_type": 1 00:25:56.082 }, 00:25:56.082 { 00:25:56.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.082 "dma_device_type": 2 00:25:56.082 } 00:25:56.082 ], 00:25:56.082 "driver_specific": {} 00:25:56.082 }' 00:25:56.082 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.082 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.082 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.082 
08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.082 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:56.340 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:56.598 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:56.598 "name": "BaseBdev3", 00:25:56.598 "aliases": [ 00:25:56.598 "d4f7e1e4-0661-4afb-9875-084cf6d97524" 00:25:56.598 ], 00:25:56.598 "product_name": "Malloc disk", 00:25:56.598 "block_size": 512, 00:25:56.598 "num_blocks": 65536, 00:25:56.598 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:56.598 "assigned_rate_limits": { 00:25:56.598 "rw_ios_per_sec": 0, 00:25:56.598 "rw_mbytes_per_sec": 0, 00:25:56.599 "r_mbytes_per_sec": 0, 00:25:56.599 "w_mbytes_per_sec": 0 00:25:56.599 }, 00:25:56.599 "claimed": true, 00:25:56.599 "claim_type": "exclusive_write", 00:25:56.599 "zoned": false, 00:25:56.599 "supported_io_types": { 00:25:56.599 "read": true, 00:25:56.599 "write": true, 00:25:56.599 "unmap": true, 00:25:56.599 "flush": true, 00:25:56.599 "reset": true, 00:25:56.599 "nvme_admin": false, 00:25:56.599 "nvme_io": false, 00:25:56.599 "nvme_io_md": false, 00:25:56.599 "write_zeroes": true, 00:25:56.599 "zcopy": true, 00:25:56.599 "get_zone_info": false, 00:25:56.599 "zone_management": false, 00:25:56.599 "zone_append": false, 00:25:56.599 "compare": false, 00:25:56.599 "compare_and_write": false, 00:25:56.599 "abort": true, 00:25:56.599 "seek_hole": false, 00:25:56.599 "seek_data": false, 00:25:56.599 "copy": true, 00:25:56.599 "nvme_iov_md": false 00:25:56.599 }, 00:25:56.599 "memory_domains": [ 00:25:56.599 { 00:25:56.599 "dma_device_id": "system", 00:25:56.599 "dma_device_type": 1 00:25:56.599 }, 00:25:56.599 { 00:25:56.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.599 "dma_device_type": 2 00:25:56.599 } 00:25:56.599 ], 00:25:56.599 "driver_specific": {} 00:25:56.599 }' 00:25:56.599 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.599 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:56.857 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:56.857 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.857 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:56.857 
08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:56.857 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.857 08:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:56.857 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:56.857 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.115 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.115 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:57.115 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:57.115 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:57.115 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:57.373 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:57.373 "name": "BaseBdev4", 00:25:57.373 "aliases": [ 00:25:57.373 "f127a4b6-feae-4d26-aa90-43d2459c934a" 00:25:57.373 ], 00:25:57.373 "product_name": "Malloc disk", 00:25:57.373 "block_size": 512, 00:25:57.373 "num_blocks": 65536, 00:25:57.373 "uuid": "f127a4b6-feae-4d26-aa90-43d2459c934a", 00:25:57.373 "assigned_rate_limits": { 00:25:57.373 "rw_ios_per_sec": 0, 00:25:57.373 "rw_mbytes_per_sec": 0, 00:25:57.373 "r_mbytes_per_sec": 0, 00:25:57.373 "w_mbytes_per_sec": 0 00:25:57.373 }, 00:25:57.373 "claimed": true, 00:25:57.373 "claim_type": "exclusive_write", 00:25:57.373 "zoned": false, 00:25:57.373 "supported_io_types": { 00:25:57.373 "read": true, 00:25:57.373 "write": true, 00:25:57.373 "unmap": true, 00:25:57.373 "flush": true, 00:25:57.373 "reset": true, 00:25:57.373 "nvme_admin": false, 00:25:57.373 "nvme_io": false, 00:25:57.373 "nvme_io_md": false, 00:25:57.373 "write_zeroes": true, 00:25:57.373 "zcopy": true, 00:25:57.373 "get_zone_info": false, 00:25:57.373 "zone_management": false, 00:25:57.373 "zone_append": false, 00:25:57.374 "compare": false, 00:25:57.374 "compare_and_write": false, 00:25:57.374 "abort": true, 00:25:57.374 "seek_hole": false, 00:25:57.374 "seek_data": false, 00:25:57.374 "copy": true, 00:25:57.374 "nvme_iov_md": false 00:25:57.374 }, 00:25:57.374 "memory_domains": [ 00:25:57.374 { 00:25:57.374 "dma_device_id": "system", 00:25:57.374 "dma_device_type": 1 00:25:57.374 }, 00:25:57.374 { 00:25:57.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.374 "dma_device_type": 2 00:25:57.374 } 00:25:57.374 ], 00:25:57.374 "driver_specific": {} 00:25:57.374 }' 00:25:57.374 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:57.374 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:57.374 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:57.374 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.374 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:57.632 08:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:57.890 [2024-07-12 08:52:33.002076] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.890 [2024-07-12 08:52:33.002377] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.890 [2024-07-12 08:52:33.002711] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.147 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.147 "name": "Existed_Raid", 00:25:58.147 "uuid": "730c029a-1909-402e-8d19-3896cbfeefd2", 00:25:58.147 "strip_size_kb": 64, 00:25:58.147 "state": "offline", 00:25:58.147 "raid_level": "concat", 00:25:58.147 "superblock": false, 00:25:58.147 "num_base_bdevs": 4, 00:25:58.147 "num_base_bdevs_discovered": 3, 00:25:58.147 "num_base_bdevs_operational": 3, 00:25:58.147 "base_bdevs_list": [ 
00:25:58.147 { 00:25:58.147 "name": null, 00:25:58.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.147 "is_configured": false, 00:25:58.147 "data_offset": 0, 00:25:58.147 "data_size": 65536 00:25:58.147 }, 00:25:58.147 { 00:25:58.147 "name": "BaseBdev2", 00:25:58.147 "uuid": "fcf5e6c9-2aa1-4557-a233-21769c95d22f", 00:25:58.147 "is_configured": true, 00:25:58.147 "data_offset": 0, 00:25:58.147 "data_size": 65536 00:25:58.147 }, 00:25:58.147 { 00:25:58.147 "name": "BaseBdev3", 00:25:58.148 "uuid": "d4f7e1e4-0661-4afb-9875-084cf6d97524", 00:25:58.148 "is_configured": true, 00:25:58.148 "data_offset": 0, 00:25:58.148 "data_size": 65536 00:25:58.148 }, 00:25:58.148 { 00:25:58.148 "name": "BaseBdev4", 00:25:58.148 "uuid": "f127a4b6-feae-4d26-aa90-43d2459c934a", 00:25:58.148 "is_configured": true, 00:25:58.148 "data_offset": 0, 00:25:58.148 "data_size": 65536 00:25:58.148 } 00:25:58.148 ] 00:25:58.148 }' 00:25:58.148 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.148 08:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.081 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:59.081 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:59.081 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:59.081 08:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.081 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:59.081 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.081 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:59.339 [2024-07-12 08:52:34.437938] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:59.339 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:59.339 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:59.339 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.339 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:59.904 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:59.904 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.904 08:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:59.904 [2024-07-12 08:52:35.053350] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:00.162 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:00.162 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:00.162 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:00.162 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:00.428 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:00.428 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:00.428 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:00.719 [2024-07-12 08:52:35.629997] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:00.719 [2024-07-12 08:52:35.630200] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:26:00.719 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:00.719 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:00.719 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.719 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:00.990 08:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:01.248 BaseBdev2 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:01.248 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:01.507 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:01.507 [ 00:26:01.507 { 00:26:01.507 "name": "BaseBdev2", 00:26:01.507 "aliases": [ 00:26:01.507 "9df1c151-a8d0-4442-80a9-91551bfe9923" 00:26:01.507 ], 00:26:01.507 "product_name": "Malloc disk", 00:26:01.507 "block_size": 512, 00:26:01.507 "num_blocks": 65536, 00:26:01.507 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:01.507 "assigned_rate_limits": { 00:26:01.507 "rw_ios_per_sec": 0, 00:26:01.507 "rw_mbytes_per_sec": 
0, 00:26:01.507 "r_mbytes_per_sec": 0, 00:26:01.507 "w_mbytes_per_sec": 0 00:26:01.507 }, 00:26:01.507 "claimed": false, 00:26:01.507 "zoned": false, 00:26:01.507 "supported_io_types": { 00:26:01.507 "read": true, 00:26:01.507 "write": true, 00:26:01.507 "unmap": true, 00:26:01.507 "flush": true, 00:26:01.507 "reset": true, 00:26:01.507 "nvme_admin": false, 00:26:01.507 "nvme_io": false, 00:26:01.507 "nvme_io_md": false, 00:26:01.507 "write_zeroes": true, 00:26:01.507 "zcopy": true, 00:26:01.507 "get_zone_info": false, 00:26:01.507 "zone_management": false, 00:26:01.507 "zone_append": false, 00:26:01.507 "compare": false, 00:26:01.507 "compare_and_write": false, 00:26:01.507 "abort": true, 00:26:01.507 "seek_hole": false, 00:26:01.507 "seek_data": false, 00:26:01.507 "copy": true, 00:26:01.507 "nvme_iov_md": false 00:26:01.507 }, 00:26:01.507 "memory_domains": [ 00:26:01.507 { 00:26:01.507 "dma_device_id": "system", 00:26:01.507 "dma_device_type": 1 00:26:01.507 }, 00:26:01.507 { 00:26:01.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.507 "dma_device_type": 2 00:26:01.507 } 00:26:01.507 ], 00:26:01.507 "driver_specific": {} 00:26:01.507 } 00:26:01.507 ] 00:26:01.507 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:01.507 08:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:01.507 08:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:01.507 08:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:01.765 BaseBdev3 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:01.765 08:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:02.333 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:02.333 [ 00:26:02.333 { 00:26:02.333 "name": "BaseBdev3", 00:26:02.333 "aliases": [ 00:26:02.333 "e202ec1a-164f-4933-9282-1a50c59fd3be" 00:26:02.333 ], 00:26:02.333 "product_name": "Malloc disk", 00:26:02.333 "block_size": 512, 00:26:02.333 "num_blocks": 65536, 00:26:02.333 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:02.333 "assigned_rate_limits": { 00:26:02.333 "rw_ios_per_sec": 0, 00:26:02.333 "rw_mbytes_per_sec": 0, 00:26:02.333 "r_mbytes_per_sec": 0, 00:26:02.333 "w_mbytes_per_sec": 0 00:26:02.333 }, 00:26:02.333 "claimed": false, 00:26:02.333 "zoned": false, 00:26:02.333 "supported_io_types": { 00:26:02.333 "read": true, 00:26:02.333 "write": true, 00:26:02.333 "unmap": true, 00:26:02.333 "flush": true, 00:26:02.333 "reset": true, 00:26:02.333 
"nvme_admin": false, 00:26:02.333 "nvme_io": false, 00:26:02.333 "nvme_io_md": false, 00:26:02.333 "write_zeroes": true, 00:26:02.333 "zcopy": true, 00:26:02.333 "get_zone_info": false, 00:26:02.333 "zone_management": false, 00:26:02.333 "zone_append": false, 00:26:02.333 "compare": false, 00:26:02.333 "compare_and_write": false, 00:26:02.333 "abort": true, 00:26:02.333 "seek_hole": false, 00:26:02.333 "seek_data": false, 00:26:02.333 "copy": true, 00:26:02.333 "nvme_iov_md": false 00:26:02.333 }, 00:26:02.333 "memory_domains": [ 00:26:02.333 { 00:26:02.333 "dma_device_id": "system", 00:26:02.333 "dma_device_type": 1 00:26:02.333 }, 00:26:02.333 { 00:26:02.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.333 "dma_device_type": 2 00:26:02.333 } 00:26:02.333 ], 00:26:02.333 "driver_specific": {} 00:26:02.333 } 00:26:02.333 ] 00:26:02.333 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:02.333 08:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:02.333 08:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:02.333 08:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:02.593 BaseBdev4 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:02.593 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:02.851 08:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:03.110 [ 00:26:03.110 { 00:26:03.110 "name": "BaseBdev4", 00:26:03.110 "aliases": [ 00:26:03.110 "cad0b737-78d1-4337-b8dd-776d81c064d2" 00:26:03.110 ], 00:26:03.110 "product_name": "Malloc disk", 00:26:03.110 "block_size": 512, 00:26:03.110 "num_blocks": 65536, 00:26:03.110 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:03.110 "assigned_rate_limits": { 00:26:03.110 "rw_ios_per_sec": 0, 00:26:03.110 "rw_mbytes_per_sec": 0, 00:26:03.110 "r_mbytes_per_sec": 0, 00:26:03.110 "w_mbytes_per_sec": 0 00:26:03.110 }, 00:26:03.110 "claimed": false, 00:26:03.110 "zoned": false, 00:26:03.110 "supported_io_types": { 00:26:03.110 "read": true, 00:26:03.110 "write": true, 00:26:03.110 "unmap": true, 00:26:03.110 "flush": true, 00:26:03.110 "reset": true, 00:26:03.110 "nvme_admin": false, 00:26:03.110 "nvme_io": false, 00:26:03.110 "nvme_io_md": false, 00:26:03.110 "write_zeroes": true, 00:26:03.110 "zcopy": true, 00:26:03.110 "get_zone_info": false, 00:26:03.110 "zone_management": false, 00:26:03.110 "zone_append": false, 00:26:03.110 "compare": false, 00:26:03.110 "compare_and_write": false, 00:26:03.110 
"abort": true, 00:26:03.110 "seek_hole": false, 00:26:03.110 "seek_data": false, 00:26:03.110 "copy": true, 00:26:03.110 "nvme_iov_md": false 00:26:03.110 }, 00:26:03.110 "memory_domains": [ 00:26:03.110 { 00:26:03.110 "dma_device_id": "system", 00:26:03.110 "dma_device_type": 1 00:26:03.110 }, 00:26:03.110 { 00:26:03.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.110 "dma_device_type": 2 00:26:03.110 } 00:26:03.110 ], 00:26:03.110 "driver_specific": {} 00:26:03.110 } 00:26:03.110 ] 00:26:03.110 08:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:03.110 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:03.110 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:03.110 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:03.369 [2024-07-12 08:52:38.435828] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:03.369 [2024-07-12 08:52:38.436292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:03.369 [2024-07-12 08:52:38.436513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:03.369 [2024-07-12 08:52:38.438896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:03.369 [2024-07-12 08:52:38.439152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.369 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.370 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.370 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.370 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.370 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.370 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.629 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.629 "name": "Existed_Raid", 00:26:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.629 "strip_size_kb": 64, 00:26:03.629 "state": "configuring", 00:26:03.629 "raid_level": "concat", 00:26:03.629 "superblock": false, 00:26:03.629 "num_base_bdevs": 4, 
00:26:03.629 "num_base_bdevs_discovered": 3, 00:26:03.629 "num_base_bdevs_operational": 4, 00:26:03.629 "base_bdevs_list": [ 00:26:03.629 { 00:26:03.629 "name": "BaseBdev1", 00:26:03.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.629 "is_configured": false, 00:26:03.629 "data_offset": 0, 00:26:03.629 "data_size": 0 00:26:03.629 }, 00:26:03.629 { 00:26:03.629 "name": "BaseBdev2", 00:26:03.629 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:03.629 "is_configured": true, 00:26:03.629 "data_offset": 0, 00:26:03.629 "data_size": 65536 00:26:03.629 }, 00:26:03.629 { 00:26:03.629 "name": "BaseBdev3", 00:26:03.629 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:03.629 "is_configured": true, 00:26:03.629 "data_offset": 0, 00:26:03.629 "data_size": 65536 00:26:03.629 }, 00:26:03.629 { 00:26:03.629 "name": "BaseBdev4", 00:26:03.629 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:03.629 "is_configured": true, 00:26:03.629 "data_offset": 0, 00:26:03.629 "data_size": 65536 00:26:03.629 } 00:26:03.629 ] 00:26:03.629 }' 00:26:03.629 08:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.629 08:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:04.566 [2024-07-12 08:52:39.696485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.566 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.825 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.825 "name": "Existed_Raid", 00:26:04.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.825 "strip_size_kb": 64, 00:26:04.825 "state": "configuring", 00:26:04.825 "raid_level": "concat", 00:26:04.825 "superblock": false, 00:26:04.825 "num_base_bdevs": 4, 00:26:04.825 "num_base_bdevs_discovered": 2, 00:26:04.825 "num_base_bdevs_operational": 4, 00:26:04.825 "base_bdevs_list": [ 00:26:04.825 { 
00:26:04.825 "name": "BaseBdev1", 00:26:04.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.825 "is_configured": false, 00:26:04.825 "data_offset": 0, 00:26:04.825 "data_size": 0 00:26:04.825 }, 00:26:04.825 { 00:26:04.825 "name": null, 00:26:04.825 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:04.825 "is_configured": false, 00:26:04.825 "data_offset": 0, 00:26:04.825 "data_size": 65536 00:26:04.825 }, 00:26:04.825 { 00:26:04.825 "name": "BaseBdev3", 00:26:04.825 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:04.825 "is_configured": true, 00:26:04.825 "data_offset": 0, 00:26:04.825 "data_size": 65536 00:26:04.825 }, 00:26:04.825 { 00:26:04.825 "name": "BaseBdev4", 00:26:04.825 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:04.825 "is_configured": true, 00:26:04.825 "data_offset": 0, 00:26:04.825 "data_size": 65536 00:26:04.825 } 00:26:04.825 ] 00:26:04.825 }' 00:26:04.825 08:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.825 08:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.760 08:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.760 08:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:06.018 08:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:06.018 08:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:06.276 [2024-07-12 08:52:41.251781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.276 BaseBdev1 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:06.276 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:06.535 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:06.794 [ 00:26:06.794 { 00:26:06.794 "name": "BaseBdev1", 00:26:06.794 "aliases": [ 00:26:06.794 "754dbded-fe14-4334-acb3-6bbd2e144089" 00:26:06.794 ], 00:26:06.794 "product_name": "Malloc disk", 00:26:06.794 "block_size": 512, 00:26:06.794 "num_blocks": 65536, 00:26:06.794 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:06.794 "assigned_rate_limits": { 00:26:06.794 "rw_ios_per_sec": 0, 00:26:06.794 "rw_mbytes_per_sec": 0, 00:26:06.794 "r_mbytes_per_sec": 0, 00:26:06.794 "w_mbytes_per_sec": 0 00:26:06.794 }, 00:26:06.794 "claimed": true, 00:26:06.794 "claim_type": "exclusive_write", 00:26:06.794 
"zoned": false, 00:26:06.794 "supported_io_types": { 00:26:06.794 "read": true, 00:26:06.794 "write": true, 00:26:06.794 "unmap": true, 00:26:06.794 "flush": true, 00:26:06.794 "reset": true, 00:26:06.794 "nvme_admin": false, 00:26:06.794 "nvme_io": false, 00:26:06.794 "nvme_io_md": false, 00:26:06.794 "write_zeroes": true, 00:26:06.794 "zcopy": true, 00:26:06.794 "get_zone_info": false, 00:26:06.794 "zone_management": false, 00:26:06.794 "zone_append": false, 00:26:06.794 "compare": false, 00:26:06.794 "compare_and_write": false, 00:26:06.794 "abort": true, 00:26:06.794 "seek_hole": false, 00:26:06.794 "seek_data": false, 00:26:06.794 "copy": true, 00:26:06.794 "nvme_iov_md": false 00:26:06.794 }, 00:26:06.794 "memory_domains": [ 00:26:06.794 { 00:26:06.794 "dma_device_id": "system", 00:26:06.794 "dma_device_type": 1 00:26:06.794 }, 00:26:06.794 { 00:26:06.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.794 "dma_device_type": 2 00:26:06.794 } 00:26:06.794 ], 00:26:06.794 "driver_specific": {} 00:26:06.794 } 00:26:06.794 ] 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.794 08:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.052 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.052 "name": "Existed_Raid", 00:26:07.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.052 "strip_size_kb": 64, 00:26:07.052 "state": "configuring", 00:26:07.052 "raid_level": "concat", 00:26:07.052 "superblock": false, 00:26:07.052 "num_base_bdevs": 4, 00:26:07.052 "num_base_bdevs_discovered": 3, 00:26:07.052 "num_base_bdevs_operational": 4, 00:26:07.052 "base_bdevs_list": [ 00:26:07.052 { 00:26:07.052 "name": "BaseBdev1", 00:26:07.052 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:07.052 "is_configured": true, 00:26:07.052 "data_offset": 0, 00:26:07.052 "data_size": 65536 00:26:07.052 }, 00:26:07.053 { 00:26:07.053 "name": null, 00:26:07.053 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:07.053 "is_configured": false, 00:26:07.053 "data_offset": 0, 00:26:07.053 "data_size": 
65536 00:26:07.053 }, 00:26:07.053 { 00:26:07.053 "name": "BaseBdev3", 00:26:07.053 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:07.053 "is_configured": true, 00:26:07.053 "data_offset": 0, 00:26:07.053 "data_size": 65536 00:26:07.053 }, 00:26:07.053 { 00:26:07.053 "name": "BaseBdev4", 00:26:07.053 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:07.053 "is_configured": true, 00:26:07.053 "data_offset": 0, 00:26:07.053 "data_size": 65536 00:26:07.053 } 00:26:07.053 ] 00:26:07.053 }' 00:26:07.053 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.053 08:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.620 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.620 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:07.878 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:07.878 08:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:08.138 [2024-07-12 08:52:43.260561] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.138 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.395 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.395 "name": "Existed_Raid", 00:26:08.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.395 "strip_size_kb": 64, 00:26:08.395 "state": "configuring", 00:26:08.395 "raid_level": "concat", 00:26:08.395 "superblock": false, 00:26:08.395 "num_base_bdevs": 4, 00:26:08.395 "num_base_bdevs_discovered": 2, 00:26:08.395 "num_base_bdevs_operational": 4, 00:26:08.395 "base_bdevs_list": [ 00:26:08.395 { 00:26:08.395 "name": "BaseBdev1", 00:26:08.395 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:08.395 "is_configured": true, 
00:26:08.395 "data_offset": 0, 00:26:08.395 "data_size": 65536 00:26:08.395 }, 00:26:08.395 { 00:26:08.395 "name": null, 00:26:08.395 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:08.395 "is_configured": false, 00:26:08.395 "data_offset": 0, 00:26:08.395 "data_size": 65536 00:26:08.395 }, 00:26:08.395 { 00:26:08.395 "name": null, 00:26:08.395 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:08.395 "is_configured": false, 00:26:08.395 "data_offset": 0, 00:26:08.395 "data_size": 65536 00:26:08.395 }, 00:26:08.395 { 00:26:08.395 "name": "BaseBdev4", 00:26:08.395 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:08.395 "is_configured": true, 00:26:08.395 "data_offset": 0, 00:26:08.395 "data_size": 65536 00:26:08.395 } 00:26:08.395 ] 00:26:08.395 }' 00:26:08.395 08:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.395 08:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.329 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.329 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:09.587 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:09.587 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:09.845 [2024-07-12 08:52:44.869192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.845 08:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.103 08:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:10.103 "name": "Existed_Raid", 00:26:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.103 "strip_size_kb": 64, 00:26:10.103 "state": "configuring", 00:26:10.103 "raid_level": "concat", 00:26:10.103 "superblock": false, 
00:26:10.103 "num_base_bdevs": 4, 00:26:10.103 "num_base_bdevs_discovered": 3, 00:26:10.103 "num_base_bdevs_operational": 4, 00:26:10.103 "base_bdevs_list": [ 00:26:10.103 { 00:26:10.103 "name": "BaseBdev1", 00:26:10.103 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:10.103 "is_configured": true, 00:26:10.103 "data_offset": 0, 00:26:10.103 "data_size": 65536 00:26:10.103 }, 00:26:10.103 { 00:26:10.103 "name": null, 00:26:10.103 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:10.103 "is_configured": false, 00:26:10.103 "data_offset": 0, 00:26:10.103 "data_size": 65536 00:26:10.103 }, 00:26:10.103 { 00:26:10.103 "name": "BaseBdev3", 00:26:10.103 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:10.104 "is_configured": true, 00:26:10.104 "data_offset": 0, 00:26:10.104 "data_size": 65536 00:26:10.104 }, 00:26:10.104 { 00:26:10.104 "name": "BaseBdev4", 00:26:10.104 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:10.104 "is_configured": true, 00:26:10.104 "data_offset": 0, 00:26:10.104 "data_size": 65536 00:26:10.104 } 00:26:10.104 ] 00:26:10.104 }' 00:26:10.104 08:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.104 08:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.040 08:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.040 08:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:11.040 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:11.040 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:11.299 [2024-07-12 08:52:46.453766] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.557 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.816 08:52:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.816 "name": "Existed_Raid", 00:26:11.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.816 "strip_size_kb": 64, 00:26:11.816 "state": "configuring", 00:26:11.816 "raid_level": "concat", 00:26:11.816 "superblock": false, 00:26:11.816 "num_base_bdevs": 4, 00:26:11.816 "num_base_bdevs_discovered": 2, 00:26:11.816 "num_base_bdevs_operational": 4, 00:26:11.816 "base_bdevs_list": [ 00:26:11.816 { 00:26:11.816 "name": null, 00:26:11.816 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:11.816 "is_configured": false, 00:26:11.816 "data_offset": 0, 00:26:11.816 "data_size": 65536 00:26:11.816 }, 00:26:11.816 { 00:26:11.816 "name": null, 00:26:11.816 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:11.816 "is_configured": false, 00:26:11.816 "data_offset": 0, 00:26:11.816 "data_size": 65536 00:26:11.816 }, 00:26:11.816 { 00:26:11.816 "name": "BaseBdev3", 00:26:11.816 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:11.816 "is_configured": true, 00:26:11.816 "data_offset": 0, 00:26:11.816 "data_size": 65536 00:26:11.816 }, 00:26:11.816 { 00:26:11.816 "name": "BaseBdev4", 00:26:11.816 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:11.816 "is_configured": true, 00:26:11.816 "data_offset": 0, 00:26:11.816 "data_size": 65536 00:26:11.816 } 00:26:11.816 ] 00:26:11.816 }' 00:26:11.816 08:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.816 08:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.383 08:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.383 08:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:12.949 08:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:12.949 08:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:12.949 [2024-07-12 08:52:48.135678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.208 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.467 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.467 "name": "Existed_Raid", 00:26:13.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.467 "strip_size_kb": 64, 00:26:13.467 "state": "configuring", 00:26:13.467 "raid_level": "concat", 00:26:13.467 "superblock": false, 00:26:13.467 "num_base_bdevs": 4, 00:26:13.467 "num_base_bdevs_discovered": 3, 00:26:13.467 "num_base_bdevs_operational": 4, 00:26:13.467 "base_bdevs_list": [ 00:26:13.467 { 00:26:13.467 "name": null, 00:26:13.467 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:13.467 "is_configured": false, 00:26:13.467 "data_offset": 0, 00:26:13.467 "data_size": 65536 00:26:13.467 }, 00:26:13.467 { 00:26:13.467 "name": "BaseBdev2", 00:26:13.467 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:13.467 "is_configured": true, 00:26:13.467 "data_offset": 0, 00:26:13.467 "data_size": 65536 00:26:13.467 }, 00:26:13.467 { 00:26:13.467 "name": "BaseBdev3", 00:26:13.467 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:13.467 "is_configured": true, 00:26:13.467 "data_offset": 0, 00:26:13.467 "data_size": 65536 00:26:13.467 }, 00:26:13.467 { 00:26:13.467 "name": "BaseBdev4", 00:26:13.467 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:13.467 "is_configured": true, 00:26:13.467 "data_offset": 0, 00:26:13.467 "data_size": 65536 00:26:13.467 } 00:26:13.467 ] 00:26:13.467 }' 00:26:13.467 08:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.467 08:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.042 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.042 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:14.314 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:14.314 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.314 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:14.880 08:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 754dbded-fe14-4334-acb3-6bbd2e144089 00:26:15.138 [2024-07-12 08:52:50.078736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:15.138 [2024-07-12 08:52:50.079076] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:15.138 [2024-07-12 08:52:50.079126] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:15.138 [2024-07-12 08:52:50.079384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:15.138 [2024-07-12 08:52:50.079889] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:15.138 [2024-07-12 08:52:50.080015] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:26:15.138 [2024-07-12 08:52:50.080392] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.138 NewBaseBdev 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:15.138 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.397 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:15.656 [ 00:26:15.656 { 00:26:15.656 "name": "NewBaseBdev", 00:26:15.656 "aliases": [ 00:26:15.656 "754dbded-fe14-4334-acb3-6bbd2e144089" 00:26:15.656 ], 00:26:15.656 "product_name": "Malloc disk", 00:26:15.656 "block_size": 512, 00:26:15.656 "num_blocks": 65536, 00:26:15.656 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:15.656 "assigned_rate_limits": { 00:26:15.656 "rw_ios_per_sec": 0, 00:26:15.656 "rw_mbytes_per_sec": 0, 00:26:15.656 "r_mbytes_per_sec": 0, 00:26:15.656 "w_mbytes_per_sec": 0 00:26:15.656 }, 00:26:15.656 "claimed": true, 00:26:15.656 "claim_type": "exclusive_write", 00:26:15.656 "zoned": false, 00:26:15.656 "supported_io_types": { 00:26:15.656 "read": true, 00:26:15.656 "write": true, 00:26:15.656 "unmap": true, 00:26:15.656 "flush": true, 00:26:15.656 "reset": true, 00:26:15.656 "nvme_admin": false, 00:26:15.656 "nvme_io": false, 00:26:15.656 "nvme_io_md": false, 00:26:15.656 "write_zeroes": true, 00:26:15.656 "zcopy": true, 00:26:15.656 "get_zone_info": false, 00:26:15.656 "zone_management": false, 00:26:15.656 "zone_append": false, 00:26:15.656 "compare": false, 00:26:15.656 "compare_and_write": false, 00:26:15.656 "abort": true, 00:26:15.656 "seek_hole": false, 00:26:15.656 "seek_data": false, 00:26:15.656 "copy": true, 00:26:15.656 "nvme_iov_md": false 00:26:15.656 }, 00:26:15.656 "memory_domains": [ 00:26:15.656 { 00:26:15.656 "dma_device_id": "system", 00:26:15.656 "dma_device_type": 1 00:26:15.656 }, 00:26:15.656 { 00:26:15.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.656 "dma_device_type": 2 00:26:15.656 } 00:26:15.656 ], 00:26:15.656 "driver_specific": {} 00:26:15.656 } 00:26:15.656 ] 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.656 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.915 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.915 "name": "Existed_Raid", 00:26:15.915 "uuid": "44fb89f9-4aa6-4529-8320-f7c9f8fd243c", 00:26:15.915 "strip_size_kb": 64, 00:26:15.915 "state": "online", 00:26:15.915 "raid_level": "concat", 00:26:15.915 "superblock": false, 00:26:15.915 "num_base_bdevs": 4, 00:26:15.915 "num_base_bdevs_discovered": 4, 00:26:15.915 "num_base_bdevs_operational": 4, 00:26:15.915 "base_bdevs_list": [ 00:26:15.915 { 00:26:15.915 "name": "NewBaseBdev", 00:26:15.915 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:15.915 "is_configured": true, 00:26:15.915 "data_offset": 0, 00:26:15.915 "data_size": 65536 00:26:15.915 }, 00:26:15.915 { 00:26:15.915 "name": "BaseBdev2", 00:26:15.915 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:15.915 "is_configured": true, 00:26:15.915 "data_offset": 0, 00:26:15.915 "data_size": 65536 00:26:15.915 }, 00:26:15.915 { 00:26:15.915 "name": "BaseBdev3", 00:26:15.915 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:15.915 "is_configured": true, 00:26:15.915 "data_offset": 0, 00:26:15.915 "data_size": 65536 00:26:15.915 }, 00:26:15.915 { 00:26:15.915 "name": "BaseBdev4", 00:26:15.915 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:15.915 "is_configured": true, 00:26:15.915 "data_offset": 0, 00:26:15.915 "data_size": 65536 00:26:15.915 } 00:26:15.915 ] 00:26:15.915 }' 00:26:15.915 08:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.915 08:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:16.851 [2024-07-12 08:52:51.975856] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:16.851 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:16.851 "name": "Existed_Raid", 00:26:16.851 "aliases": [ 00:26:16.851 "44fb89f9-4aa6-4529-8320-f7c9f8fd243c" 00:26:16.851 ], 00:26:16.851 "product_name": "Raid Volume", 00:26:16.851 "block_size": 512, 00:26:16.851 "num_blocks": 262144, 00:26:16.851 "uuid": "44fb89f9-4aa6-4529-8320-f7c9f8fd243c", 00:26:16.851 "assigned_rate_limits": { 00:26:16.851 "rw_ios_per_sec": 0, 00:26:16.851 "rw_mbytes_per_sec": 0, 00:26:16.851 "r_mbytes_per_sec": 0, 00:26:16.851 "w_mbytes_per_sec": 0 00:26:16.851 }, 00:26:16.851 "claimed": false, 00:26:16.851 "zoned": false, 00:26:16.851 "supported_io_types": { 00:26:16.851 "read": true, 00:26:16.851 "write": true, 00:26:16.851 "unmap": true, 00:26:16.851 "flush": true, 00:26:16.851 "reset": true, 00:26:16.851 "nvme_admin": false, 00:26:16.851 "nvme_io": false, 00:26:16.851 "nvme_io_md": false, 00:26:16.851 "write_zeroes": true, 00:26:16.851 "zcopy": false, 00:26:16.851 "get_zone_info": false, 00:26:16.851 "zone_management": false, 00:26:16.851 "zone_append": false, 00:26:16.851 "compare": false, 00:26:16.851 "compare_and_write": false, 00:26:16.851 "abort": false, 00:26:16.851 "seek_hole": false, 00:26:16.851 "seek_data": false, 00:26:16.851 "copy": false, 00:26:16.851 "nvme_iov_md": false 00:26:16.851 }, 00:26:16.851 "memory_domains": [ 00:26:16.851 { 00:26:16.851 "dma_device_id": "system", 00:26:16.851 "dma_device_type": 1 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.851 "dma_device_type": 2 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "system", 00:26:16.851 "dma_device_type": 1 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.851 "dma_device_type": 2 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "system", 00:26:16.851 "dma_device_type": 1 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.851 "dma_device_type": 2 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "system", 00:26:16.851 "dma_device_type": 1 00:26:16.851 }, 00:26:16.851 { 00:26:16.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.851 "dma_device_type": 2 00:26:16.851 } 00:26:16.851 ], 00:26:16.851 "driver_specific": { 00:26:16.851 "raid": { 00:26:16.851 "uuid": "44fb89f9-4aa6-4529-8320-f7c9f8fd243c", 00:26:16.851 "strip_size_kb": 64, 00:26:16.851 "state": "online", 00:26:16.851 "raid_level": "concat", 00:26:16.851 "superblock": false, 00:26:16.851 "num_base_bdevs": 4, 00:26:16.851 "num_base_bdevs_discovered": 4, 00:26:16.852 "num_base_bdevs_operational": 4, 00:26:16.852 "base_bdevs_list": [ 00:26:16.852 { 00:26:16.852 "name": "NewBaseBdev", 00:26:16.852 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:16.852 "is_configured": true, 00:26:16.852 "data_offset": 0, 00:26:16.852 "data_size": 65536 00:26:16.852 }, 00:26:16.852 { 00:26:16.852 "name": "BaseBdev2", 00:26:16.852 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:16.852 "is_configured": true, 00:26:16.852 "data_offset": 0, 00:26:16.852 "data_size": 65536 00:26:16.852 }, 00:26:16.852 { 00:26:16.852 "name": "BaseBdev3", 00:26:16.852 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:16.852 "is_configured": true, 00:26:16.852 "data_offset": 0, 00:26:16.852 "data_size": 65536 00:26:16.852 
}, 00:26:16.852 { 00:26:16.852 "name": "BaseBdev4", 00:26:16.852 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:16.852 "is_configured": true, 00:26:16.852 "data_offset": 0, 00:26:16.852 "data_size": 65536 00:26:16.852 } 00:26:16.852 ] 00:26:16.852 } 00:26:16.852 } 00:26:16.852 }' 00:26:16.852 08:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.111 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:17.111 BaseBdev2 00:26:17.111 BaseBdev3 00:26:17.111 BaseBdev4' 00:26:17.111 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.111 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:17.111 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.370 "name": "NewBaseBdev", 00:26:17.370 "aliases": [ 00:26:17.370 "754dbded-fe14-4334-acb3-6bbd2e144089" 00:26:17.370 ], 00:26:17.370 "product_name": "Malloc disk", 00:26:17.370 "block_size": 512, 00:26:17.370 "num_blocks": 65536, 00:26:17.370 "uuid": "754dbded-fe14-4334-acb3-6bbd2e144089", 00:26:17.370 "assigned_rate_limits": { 00:26:17.370 "rw_ios_per_sec": 0, 00:26:17.370 "rw_mbytes_per_sec": 0, 00:26:17.370 "r_mbytes_per_sec": 0, 00:26:17.370 "w_mbytes_per_sec": 0 00:26:17.370 }, 00:26:17.370 "claimed": true, 00:26:17.370 "claim_type": "exclusive_write", 00:26:17.370 "zoned": false, 00:26:17.370 "supported_io_types": { 00:26:17.370 "read": true, 00:26:17.370 "write": true, 00:26:17.370 "unmap": true, 00:26:17.370 "flush": true, 00:26:17.370 "reset": true, 00:26:17.370 "nvme_admin": false, 00:26:17.370 "nvme_io": false, 00:26:17.370 "nvme_io_md": false, 00:26:17.370 "write_zeroes": true, 00:26:17.370 "zcopy": true, 00:26:17.370 "get_zone_info": false, 00:26:17.370 "zone_management": false, 00:26:17.370 "zone_append": false, 00:26:17.370 "compare": false, 00:26:17.370 "compare_and_write": false, 00:26:17.370 "abort": true, 00:26:17.370 "seek_hole": false, 00:26:17.370 "seek_data": false, 00:26:17.370 "copy": true, 00:26:17.370 "nvme_iov_md": false 00:26:17.370 }, 00:26:17.370 "memory_domains": [ 00:26:17.370 { 00:26:17.370 "dma_device_id": "system", 00:26:17.370 "dma_device_type": 1 00:26:17.370 }, 00:26:17.370 { 00:26:17.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.370 "dma_device_type": 2 00:26:17.370 } 00:26:17.370 ], 00:26:17.370 "driver_specific": {} 00:26:17.370 }' 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.370 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.629 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.887 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.887 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.887 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:17.887 08:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.145 "name": "BaseBdev2", 00:26:18.145 "aliases": [ 00:26:18.145 "9df1c151-a8d0-4442-80a9-91551bfe9923" 00:26:18.145 ], 00:26:18.145 "product_name": "Malloc disk", 00:26:18.145 "block_size": 512, 00:26:18.145 "num_blocks": 65536, 00:26:18.145 "uuid": "9df1c151-a8d0-4442-80a9-91551bfe9923", 00:26:18.145 "assigned_rate_limits": { 00:26:18.145 "rw_ios_per_sec": 0, 00:26:18.145 "rw_mbytes_per_sec": 0, 00:26:18.145 "r_mbytes_per_sec": 0, 00:26:18.145 "w_mbytes_per_sec": 0 00:26:18.145 }, 00:26:18.145 "claimed": true, 00:26:18.145 "claim_type": "exclusive_write", 00:26:18.145 "zoned": false, 00:26:18.145 "supported_io_types": { 00:26:18.145 "read": true, 00:26:18.145 "write": true, 00:26:18.145 "unmap": true, 00:26:18.145 "flush": true, 00:26:18.145 "reset": true, 00:26:18.145 "nvme_admin": false, 00:26:18.145 "nvme_io": false, 00:26:18.145 "nvme_io_md": false, 00:26:18.145 "write_zeroes": true, 00:26:18.145 "zcopy": true, 00:26:18.145 "get_zone_info": false, 00:26:18.145 "zone_management": false, 00:26:18.145 "zone_append": false, 00:26:18.145 "compare": false, 00:26:18.145 "compare_and_write": false, 00:26:18.145 "abort": true, 00:26:18.145 "seek_hole": false, 00:26:18.145 "seek_data": false, 00:26:18.145 "copy": true, 00:26:18.145 "nvme_iov_md": false 00:26:18.145 }, 00:26:18.145 "memory_domains": [ 00:26:18.145 { 00:26:18.145 "dma_device_id": "system", 00:26:18.145 "dma_device_type": 1 00:26:18.145 }, 00:26:18.145 { 00:26:18.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.145 "dma_device_type": 2 00:26:18.145 } 00:26:18.145 ], 00:26:18.145 "driver_specific": {} 00:26:18.145 }' 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.145 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.402 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.402 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.402 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.402 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.402 08:52:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.402 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.659 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.659 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.659 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:18.659 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.917 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.917 "name": "BaseBdev3", 00:26:18.917 "aliases": [ 00:26:18.917 "e202ec1a-164f-4933-9282-1a50c59fd3be" 00:26:18.917 ], 00:26:18.917 "product_name": "Malloc disk", 00:26:18.917 "block_size": 512, 00:26:18.917 "num_blocks": 65536, 00:26:18.917 "uuid": "e202ec1a-164f-4933-9282-1a50c59fd3be", 00:26:18.917 "assigned_rate_limits": { 00:26:18.917 "rw_ios_per_sec": 0, 00:26:18.917 "rw_mbytes_per_sec": 0, 00:26:18.917 "r_mbytes_per_sec": 0, 00:26:18.917 "w_mbytes_per_sec": 0 00:26:18.917 }, 00:26:18.917 "claimed": true, 00:26:18.917 "claim_type": "exclusive_write", 00:26:18.917 "zoned": false, 00:26:18.917 "supported_io_types": { 00:26:18.917 "read": true, 00:26:18.917 "write": true, 00:26:18.917 "unmap": true, 00:26:18.917 "flush": true, 00:26:18.917 "reset": true, 00:26:18.917 "nvme_admin": false, 00:26:18.917 "nvme_io": false, 00:26:18.917 "nvme_io_md": false, 00:26:18.917 "write_zeroes": true, 00:26:18.917 "zcopy": true, 00:26:18.917 "get_zone_info": false, 00:26:18.917 "zone_management": false, 00:26:18.917 "zone_append": false, 00:26:18.917 "compare": false, 00:26:18.917 "compare_and_write": false, 00:26:18.917 "abort": true, 00:26:18.917 "seek_hole": false, 00:26:18.917 "seek_data": false, 00:26:18.917 "copy": true, 00:26:18.917 "nvme_iov_md": false 00:26:18.917 }, 00:26:18.917 "memory_domains": [ 00:26:18.917 { 00:26:18.917 "dma_device_id": "system", 00:26:18.917 "dma_device_type": 1 00:26:18.917 }, 00:26:18.917 { 00:26:18.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.917 "dma_device_type": 2 00:26:18.917 } 00:26:18.917 ], 00:26:18.917 "driver_specific": {} 00:26:18.917 }' 00:26:18.917 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.917 08:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.917 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.917 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.917 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.175 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.434 08:52:54 
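
The repeated jq probes above are the per-base-bdev half of verify_raid_bdev_properties: each member is dumped individually and its block_size, md_size, md_interleave and dif_type are compared against the values expected here for the malloc base bdevs. A condensed, hand-rolled sketch of that loop, assuming the socket path and bdev names shown in the log (the real helper in bdev_raid.sh first derives the name list from the raid dump via the base_bdevs_list jq filter seen earlier):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
    # dump one base bdev and pull out the fields the test asserts on
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<<"$info") == 512 ]]
    [[ $(jq .md_size <<<"$info") == null ]]
    [[ $(jq .md_interleave <<<"$info") == null ]]
    [[ $(jq .dif_type <<<"$info") == null ]]
done
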
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.434 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:19.434 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:19.434 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:19.693 "name": "BaseBdev4", 00:26:19.693 "aliases": [ 00:26:19.693 "cad0b737-78d1-4337-b8dd-776d81c064d2" 00:26:19.693 ], 00:26:19.693 "product_name": "Malloc disk", 00:26:19.693 "block_size": 512, 00:26:19.693 "num_blocks": 65536, 00:26:19.693 "uuid": "cad0b737-78d1-4337-b8dd-776d81c064d2", 00:26:19.693 "assigned_rate_limits": { 00:26:19.693 "rw_ios_per_sec": 0, 00:26:19.693 "rw_mbytes_per_sec": 0, 00:26:19.693 "r_mbytes_per_sec": 0, 00:26:19.693 "w_mbytes_per_sec": 0 00:26:19.693 }, 00:26:19.693 "claimed": true, 00:26:19.693 "claim_type": "exclusive_write", 00:26:19.693 "zoned": false, 00:26:19.693 "supported_io_types": { 00:26:19.693 "read": true, 00:26:19.693 "write": true, 00:26:19.693 "unmap": true, 00:26:19.693 "flush": true, 00:26:19.693 "reset": true, 00:26:19.693 "nvme_admin": false, 00:26:19.693 "nvme_io": false, 00:26:19.693 "nvme_io_md": false, 00:26:19.693 "write_zeroes": true, 00:26:19.693 "zcopy": true, 00:26:19.693 "get_zone_info": false, 00:26:19.693 "zone_management": false, 00:26:19.693 "zone_append": false, 00:26:19.693 "compare": false, 00:26:19.693 "compare_and_write": false, 00:26:19.693 "abort": true, 00:26:19.693 "seek_hole": false, 00:26:19.693 "seek_data": false, 00:26:19.693 "copy": true, 00:26:19.693 "nvme_iov_md": false 00:26:19.693 }, 00:26:19.693 "memory_domains": [ 00:26:19.693 { 00:26:19.693 "dma_device_id": "system", 00:26:19.693 "dma_device_type": 1 00:26:19.693 }, 00:26:19.693 { 00:26:19.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.693 "dma_device_type": 2 00:26:19.693 } 00:26:19.693 ], 00:26:19.693 "driver_specific": {} 00:26:19.693 }' 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.693 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.952 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.952 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.952 08:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.952 08:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.952 08:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.952 08:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:20.210 [2024-07-12 08:52:55.268405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:20.210 [2024-07-12 08:52:55.268464] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:20.210 [2024-07-12 08:52:55.268546] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:20.210 [2024-07-12 08:52:55.268620] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:20.210 [2024-07-12 08:52:55.268631] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:26:20.210 08:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 139006 00:26:20.210 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 139006 ']' 00:26:20.210 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 139006 00:26:20.210 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139006 00:26:20.211 killing process with pid 139006 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139006' 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 139006 00:26:20.211 08:52:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 139006 00:26:20.211 [2024-07-12 08:52:55.306598] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.469 [2024-07-12 08:52:55.598254] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:21.846 ************************************ 00:26:21.846 END TEST raid_state_function_test 00:26:21.846 ************************************ 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:21.846 00:26:21.846 real 0m37.354s 00:26:21.846 user 1m10.283s 00:26:21.846 sys 0m4.016s 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.846 08:52:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:21.846 08:52:56 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:26:21.846 08:52:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:21.846 08:52:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.846 08:52:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:21.846 ************************************ 00:26:21.846 START TEST raid_state_function_test_sb 00:26:21.846 ************************************ 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=140206 00:26:21.846 Process raid pid: 140206 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 140206' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 140206 /var/tmp/spdk-raid.sock 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 140206 ']' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:21.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.846 08:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.846 [2024-07-12 08:52:56.847364] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:26:21.846 [2024-07-12 08:52:56.847575] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.846 [2024-07-12 08:52:57.018064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.105 [2024-07-12 08:52:57.261168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.363 [2024-07-12 08:52:57.457636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.622 08:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.622 08:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:22.622 08:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:22.881 [2024-07-12 08:52:57.992129] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:22.881 [2024-07-12 08:52:57.992275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:22.881 [2024-07-12 08:52:57.992293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:22.881 [2024-07-12 08:52:57.992337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:22.881 [2024-07-12 08:52:57.992348] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:22.881 [2024-07-12 08:52:57.992366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:22.881 [2024-07-12 08:52:57.992374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:22.881 [2024-07-12 08:52:57.992398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:22.881 
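
At this point the superblock variant has launched its own bdev_svc instance (pid 140206) and registers the concat array before any of its base bdevs exist, so the array sits in the configuring state. A minimal sketch of that bring-up outside the test harness, assuming the repo paths and socket name from the log; the polling loop is only a crude stand-in for the test's waitforlisten helper:

# launch the RPC target with the same flags the test uses
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# poll until the socket answers (simplified waitforlisten)
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
# create the raid first; with no base bdevs present it stays "configuring"
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
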
08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.881 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.140 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.140 "name": "Existed_Raid", 00:26:23.140 "uuid": "d93c9a64-86a4-4731-b299-4a3906d90b06", 00:26:23.140 "strip_size_kb": 64, 00:26:23.140 "state": "configuring", 00:26:23.140 "raid_level": "concat", 00:26:23.140 "superblock": true, 00:26:23.140 "num_base_bdevs": 4, 00:26:23.140 "num_base_bdevs_discovered": 0, 00:26:23.140 "num_base_bdevs_operational": 4, 00:26:23.140 "base_bdevs_list": [ 00:26:23.140 { 00:26:23.140 "name": "BaseBdev1", 00:26:23.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.140 "is_configured": false, 00:26:23.140 "data_offset": 0, 00:26:23.140 "data_size": 0 00:26:23.140 }, 00:26:23.140 { 00:26:23.140 "name": "BaseBdev2", 00:26:23.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.140 "is_configured": false, 00:26:23.140 "data_offset": 0, 00:26:23.140 "data_size": 0 00:26:23.140 }, 00:26:23.140 { 00:26:23.140 "name": "BaseBdev3", 00:26:23.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.140 "is_configured": false, 00:26:23.140 "data_offset": 0, 00:26:23.140 "data_size": 0 00:26:23.140 }, 00:26:23.140 { 00:26:23.140 "name": "BaseBdev4", 00:26:23.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.140 "is_configured": false, 00:26:23.140 "data_offset": 0, 00:26:23.140 "data_size": 0 00:26:23.140 } 00:26:23.140 ] 00:26:23.140 }' 00:26:23.140 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.140 08:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.076 08:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:24.076 [2024-07-12 08:52:59.108133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:24.076 [2024-07-12 08:52:59.108172] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:26:24.076 08:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:24.335 [2024-07-12 08:52:59.364274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:24.335 [2024-07-12 08:52:59.364365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:24.335 [2024-07-12 08:52:59.364379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:24.335 [2024-07-12 08:52:59.364432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:24.335 [2024-07-12 08:52:59.364442] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:24.335 [2024-07-12 08:52:59.364476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:24.335 [2024-07-12 08:52:59.364484] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:24.335 [2024-07-12 08:52:59.364521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:24.335 08:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:24.594 [2024-07-12 08:52:59.622426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.594 BaseBdev1 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:24.594 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.860 08:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:25.127 [ 00:26:25.127 { 00:26:25.127 "name": "BaseBdev1", 00:26:25.127 "aliases": [ 00:26:25.127 "f25b0b2b-9148-430c-a1ee-63e7c3916665" 00:26:25.127 ], 00:26:25.127 "product_name": "Malloc disk", 00:26:25.127 "block_size": 512, 00:26:25.127 "num_blocks": 65536, 00:26:25.127 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:25.127 "assigned_rate_limits": { 00:26:25.127 "rw_ios_per_sec": 0, 00:26:25.127 "rw_mbytes_per_sec": 0, 00:26:25.127 "r_mbytes_per_sec": 0, 00:26:25.127 "w_mbytes_per_sec": 0 00:26:25.127 }, 00:26:25.127 "claimed": true, 00:26:25.127 "claim_type": "exclusive_write", 00:26:25.127 "zoned": false, 00:26:25.127 "supported_io_types": { 00:26:25.127 "read": true, 00:26:25.127 "write": true, 00:26:25.127 "unmap": true, 00:26:25.127 
"flush": true, 00:26:25.127 "reset": true, 00:26:25.127 "nvme_admin": false, 00:26:25.127 "nvme_io": false, 00:26:25.127 "nvme_io_md": false, 00:26:25.127 "write_zeroes": true, 00:26:25.127 "zcopy": true, 00:26:25.127 "get_zone_info": false, 00:26:25.127 "zone_management": false, 00:26:25.127 "zone_append": false, 00:26:25.127 "compare": false, 00:26:25.127 "compare_and_write": false, 00:26:25.127 "abort": true, 00:26:25.127 "seek_hole": false, 00:26:25.127 "seek_data": false, 00:26:25.127 "copy": true, 00:26:25.127 "nvme_iov_md": false 00:26:25.127 }, 00:26:25.127 "memory_domains": [ 00:26:25.127 { 00:26:25.127 "dma_device_id": "system", 00:26:25.127 "dma_device_type": 1 00:26:25.127 }, 00:26:25.127 { 00:26:25.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.127 "dma_device_type": 2 00:26:25.127 } 00:26:25.127 ], 00:26:25.127 "driver_specific": {} 00:26:25.127 } 00:26:25.127 ] 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.127 "name": "Existed_Raid", 00:26:25.127 "uuid": "c3d641b2-4da9-4cc0-8e8d-e00881db4546", 00:26:25.127 "strip_size_kb": 64, 00:26:25.127 "state": "configuring", 00:26:25.127 "raid_level": "concat", 00:26:25.127 "superblock": true, 00:26:25.127 "num_base_bdevs": 4, 00:26:25.127 "num_base_bdevs_discovered": 1, 00:26:25.127 "num_base_bdevs_operational": 4, 00:26:25.127 "base_bdevs_list": [ 00:26:25.127 { 00:26:25.127 "name": "BaseBdev1", 00:26:25.127 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:25.127 "is_configured": true, 00:26:25.127 "data_offset": 2048, 00:26:25.127 "data_size": 63488 00:26:25.127 }, 00:26:25.127 { 00:26:25.127 "name": "BaseBdev2", 00:26:25.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.127 "is_configured": false, 00:26:25.127 "data_offset": 0, 00:26:25.127 "data_size": 0 00:26:25.127 }, 00:26:25.127 { 00:26:25.127 "name": "BaseBdev3", 00:26:25.127 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:25.127 "is_configured": false, 00:26:25.127 "data_offset": 0, 00:26:25.127 "data_size": 0 00:26:25.127 }, 00:26:25.127 { 00:26:25.127 "name": "BaseBdev4", 00:26:25.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.127 "is_configured": false, 00:26:25.127 "data_offset": 0, 00:26:25.127 "data_size": 0 00:26:25.127 } 00:26:25.127 ] 00:26:25.127 }' 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.127 08:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.063 08:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:26.322 [2024-07-12 08:53:01.258920] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:26.322 [2024-07-12 08:53:01.259009] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:26.322 [2024-07-12 08:53:01.487034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:26.322 [2024-07-12 08:53:01.489306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:26.322 [2024-07-12 08:53:01.489380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:26.322 [2024-07-12 08:53:01.489394] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:26.322 [2024-07-12 08:53:01.489420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:26.322 [2024-07-12 08:53:01.489430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:26.322 [2024-07-12 08:53:01.489457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.322 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.581 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.581 "name": "Existed_Raid", 00:26:26.581 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:26.581 "strip_size_kb": 64, 00:26:26.581 "state": "configuring", 00:26:26.581 "raid_level": "concat", 00:26:26.581 "superblock": true, 00:26:26.581 "num_base_bdevs": 4, 00:26:26.581 "num_base_bdevs_discovered": 1, 00:26:26.581 "num_base_bdevs_operational": 4, 00:26:26.581 "base_bdevs_list": [ 00:26:26.581 { 00:26:26.581 "name": "BaseBdev1", 00:26:26.581 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:26.581 "is_configured": true, 00:26:26.581 "data_offset": 2048, 00:26:26.581 "data_size": 63488 00:26:26.581 }, 00:26:26.581 { 00:26:26.581 "name": "BaseBdev2", 00:26:26.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.581 "is_configured": false, 00:26:26.581 "data_offset": 0, 00:26:26.581 "data_size": 0 00:26:26.581 }, 00:26:26.581 { 00:26:26.581 "name": "BaseBdev3", 00:26:26.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.581 "is_configured": false, 00:26:26.581 "data_offset": 0, 00:26:26.581 "data_size": 0 00:26:26.581 }, 00:26:26.581 { 00:26:26.581 "name": "BaseBdev4", 00:26:26.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.581 "is_configured": false, 00:26:26.581 "data_offset": 0, 00:26:26.581 "data_size": 0 00:26:26.581 } 00:26:26.581 ] 00:26:26.581 }' 00:26:26.581 08:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.581 08:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:27.518 [2024-07-12 08:53:02.667461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:27.518 BaseBdev2 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:27.518 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.777 08:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:28.035 [ 00:26:28.035 { 00:26:28.035 "name": "BaseBdev2", 
00:26:28.035 "aliases": [ 00:26:28.035 "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9" 00:26:28.035 ], 00:26:28.035 "product_name": "Malloc disk", 00:26:28.035 "block_size": 512, 00:26:28.035 "num_blocks": 65536, 00:26:28.035 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:28.035 "assigned_rate_limits": { 00:26:28.035 "rw_ios_per_sec": 0, 00:26:28.035 "rw_mbytes_per_sec": 0, 00:26:28.035 "r_mbytes_per_sec": 0, 00:26:28.035 "w_mbytes_per_sec": 0 00:26:28.035 }, 00:26:28.035 "claimed": true, 00:26:28.035 "claim_type": "exclusive_write", 00:26:28.035 "zoned": false, 00:26:28.035 "supported_io_types": { 00:26:28.035 "read": true, 00:26:28.035 "write": true, 00:26:28.035 "unmap": true, 00:26:28.035 "flush": true, 00:26:28.035 "reset": true, 00:26:28.035 "nvme_admin": false, 00:26:28.035 "nvme_io": false, 00:26:28.035 "nvme_io_md": false, 00:26:28.035 "write_zeroes": true, 00:26:28.035 "zcopy": true, 00:26:28.035 "get_zone_info": false, 00:26:28.035 "zone_management": false, 00:26:28.035 "zone_append": false, 00:26:28.035 "compare": false, 00:26:28.035 "compare_and_write": false, 00:26:28.035 "abort": true, 00:26:28.035 "seek_hole": false, 00:26:28.035 "seek_data": false, 00:26:28.035 "copy": true, 00:26:28.035 "nvme_iov_md": false 00:26:28.035 }, 00:26:28.035 "memory_domains": [ 00:26:28.035 { 00:26:28.035 "dma_device_id": "system", 00:26:28.035 "dma_device_type": 1 00:26:28.035 }, 00:26:28.035 { 00:26:28.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.035 "dma_device_type": 2 00:26:28.035 } 00:26:28.035 ], 00:26:28.035 "driver_specific": {} 00:26:28.035 } 00:26:28.035 ] 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.035 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.292 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.292 
"name": "Existed_Raid", 00:26:28.292 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:28.292 "strip_size_kb": 64, 00:26:28.292 "state": "configuring", 00:26:28.292 "raid_level": "concat", 00:26:28.292 "superblock": true, 00:26:28.292 "num_base_bdevs": 4, 00:26:28.292 "num_base_bdevs_discovered": 2, 00:26:28.292 "num_base_bdevs_operational": 4, 00:26:28.292 "base_bdevs_list": [ 00:26:28.292 { 00:26:28.292 "name": "BaseBdev1", 00:26:28.292 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:28.292 "is_configured": true, 00:26:28.292 "data_offset": 2048, 00:26:28.292 "data_size": 63488 00:26:28.292 }, 00:26:28.292 { 00:26:28.292 "name": "BaseBdev2", 00:26:28.292 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:28.292 "is_configured": true, 00:26:28.292 "data_offset": 2048, 00:26:28.292 "data_size": 63488 00:26:28.292 }, 00:26:28.292 { 00:26:28.292 "name": "BaseBdev3", 00:26:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.292 "is_configured": false, 00:26:28.292 "data_offset": 0, 00:26:28.292 "data_size": 0 00:26:28.292 }, 00:26:28.292 { 00:26:28.292 "name": "BaseBdev4", 00:26:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.292 "is_configured": false, 00:26:28.292 "data_offset": 0, 00:26:28.292 "data_size": 0 00:26:28.292 } 00:26:28.292 ] 00:26:28.292 }' 00:26:28.292 08:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.292 08:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:29.225 [2024-07-12 08:53:04.403015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.225 BaseBdev3 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:29.225 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:29.483 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:29.740 [ 00:26:29.740 { 00:26:29.740 "name": "BaseBdev3", 00:26:29.740 "aliases": [ 00:26:29.740 "98494941-7933-470d-9535-f4b1cc2c3b8b" 00:26:29.740 ], 00:26:29.740 "product_name": "Malloc disk", 00:26:29.740 "block_size": 512, 00:26:29.740 "num_blocks": 65536, 00:26:29.740 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:29.740 "assigned_rate_limits": { 00:26:29.740 "rw_ios_per_sec": 0, 00:26:29.740 "rw_mbytes_per_sec": 0, 00:26:29.740 "r_mbytes_per_sec": 0, 00:26:29.740 "w_mbytes_per_sec": 0 00:26:29.740 }, 00:26:29.740 "claimed": true, 00:26:29.740 "claim_type": 
"exclusive_write", 00:26:29.740 "zoned": false, 00:26:29.740 "supported_io_types": { 00:26:29.740 "read": true, 00:26:29.740 "write": true, 00:26:29.740 "unmap": true, 00:26:29.740 "flush": true, 00:26:29.740 "reset": true, 00:26:29.740 "nvme_admin": false, 00:26:29.740 "nvme_io": false, 00:26:29.740 "nvme_io_md": false, 00:26:29.740 "write_zeroes": true, 00:26:29.740 "zcopy": true, 00:26:29.740 "get_zone_info": false, 00:26:29.740 "zone_management": false, 00:26:29.740 "zone_append": false, 00:26:29.740 "compare": false, 00:26:29.740 "compare_and_write": false, 00:26:29.740 "abort": true, 00:26:29.740 "seek_hole": false, 00:26:29.740 "seek_data": false, 00:26:29.740 "copy": true, 00:26:29.740 "nvme_iov_md": false 00:26:29.740 }, 00:26:29.740 "memory_domains": [ 00:26:29.740 { 00:26:29.740 "dma_device_id": "system", 00:26:29.740 "dma_device_type": 1 00:26:29.740 }, 00:26:29.740 { 00:26:29.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.740 "dma_device_type": 2 00:26:29.740 } 00:26:29.740 ], 00:26:29.740 "driver_specific": {} 00:26:29.740 } 00:26:29.740 ] 00:26:29.740 08:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.741 08:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.999 08:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.999 "name": "Existed_Raid", 00:26:29.999 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:29.999 "strip_size_kb": 64, 00:26:29.999 "state": "configuring", 00:26:29.999 "raid_level": "concat", 00:26:29.999 "superblock": true, 00:26:29.999 "num_base_bdevs": 4, 00:26:29.999 "num_base_bdevs_discovered": 3, 00:26:29.999 "num_base_bdevs_operational": 4, 00:26:29.999 "base_bdevs_list": [ 00:26:29.999 { 00:26:29.999 "name": "BaseBdev1", 00:26:29.999 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:29.999 
"is_configured": true, 00:26:29.999 "data_offset": 2048, 00:26:29.999 "data_size": 63488 00:26:29.999 }, 00:26:29.999 { 00:26:29.999 "name": "BaseBdev2", 00:26:29.999 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:29.999 "is_configured": true, 00:26:29.999 "data_offset": 2048, 00:26:29.999 "data_size": 63488 00:26:29.999 }, 00:26:29.999 { 00:26:29.999 "name": "BaseBdev3", 00:26:29.999 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:29.999 "is_configured": true, 00:26:29.999 "data_offset": 2048, 00:26:29.999 "data_size": 63488 00:26:29.999 }, 00:26:29.999 { 00:26:29.999 "name": "BaseBdev4", 00:26:29.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.999 "is_configured": false, 00:26:29.999 "data_offset": 0, 00:26:29.999 "data_size": 0 00:26:29.999 } 00:26:29.999 ] 00:26:29.999 }' 00:26:29.999 08:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.999 08:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.933 08:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:30.933 [2024-07-12 08:53:06.094432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:30.933 [2024-07-12 08:53:06.094749] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:26:30.933 [2024-07-12 08:53:06.094782] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:30.933 [2024-07-12 08:53:06.094920] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:30.933 BaseBdev4 00:26:30.933 [2024-07-12 08:53:06.095301] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:26:30.933 [2024-07-12 08:53:06.095327] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:26:30.933 [2024-07-12 08:53:06.095472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:30.933 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:31.192 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:31.451 [ 00:26:31.451 { 00:26:31.451 "name": "BaseBdev4", 00:26:31.451 "aliases": [ 00:26:31.451 "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa" 00:26:31.451 ], 00:26:31.451 "product_name": "Malloc disk", 00:26:31.451 "block_size": 512, 00:26:31.451 "num_blocks": 65536, 00:26:31.451 "uuid": 
"7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa", 00:26:31.451 "assigned_rate_limits": { 00:26:31.451 "rw_ios_per_sec": 0, 00:26:31.451 "rw_mbytes_per_sec": 0, 00:26:31.451 "r_mbytes_per_sec": 0, 00:26:31.451 "w_mbytes_per_sec": 0 00:26:31.451 }, 00:26:31.451 "claimed": true, 00:26:31.451 "claim_type": "exclusive_write", 00:26:31.451 "zoned": false, 00:26:31.451 "supported_io_types": { 00:26:31.451 "read": true, 00:26:31.451 "write": true, 00:26:31.451 "unmap": true, 00:26:31.451 "flush": true, 00:26:31.451 "reset": true, 00:26:31.451 "nvme_admin": false, 00:26:31.451 "nvme_io": false, 00:26:31.451 "nvme_io_md": false, 00:26:31.451 "write_zeroes": true, 00:26:31.451 "zcopy": true, 00:26:31.451 "get_zone_info": false, 00:26:31.451 "zone_management": false, 00:26:31.451 "zone_append": false, 00:26:31.451 "compare": false, 00:26:31.451 "compare_and_write": false, 00:26:31.451 "abort": true, 00:26:31.451 "seek_hole": false, 00:26:31.451 "seek_data": false, 00:26:31.451 "copy": true, 00:26:31.451 "nvme_iov_md": false 00:26:31.451 }, 00:26:31.451 "memory_domains": [ 00:26:31.451 { 00:26:31.451 "dma_device_id": "system", 00:26:31.451 "dma_device_type": 1 00:26:31.451 }, 00:26:31.451 { 00:26:31.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.451 "dma_device_type": 2 00:26:31.451 } 00:26:31.451 ], 00:26:31.451 "driver_specific": {} 00:26:31.451 } 00:26:31.451 ] 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.451 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.710 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.710 "name": "Existed_Raid", 00:26:31.710 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:31.710 "strip_size_kb": 64, 00:26:31.710 "state": "online", 00:26:31.710 "raid_level": "concat", 00:26:31.710 "superblock": true, 00:26:31.710 
"num_base_bdevs": 4, 00:26:31.710 "num_base_bdevs_discovered": 4, 00:26:31.710 "num_base_bdevs_operational": 4, 00:26:31.710 "base_bdevs_list": [ 00:26:31.710 { 00:26:31.710 "name": "BaseBdev1", 00:26:31.710 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:31.710 "is_configured": true, 00:26:31.710 "data_offset": 2048, 00:26:31.710 "data_size": 63488 00:26:31.710 }, 00:26:31.710 { 00:26:31.710 "name": "BaseBdev2", 00:26:31.710 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:31.710 "is_configured": true, 00:26:31.710 "data_offset": 2048, 00:26:31.710 "data_size": 63488 00:26:31.710 }, 00:26:31.710 { 00:26:31.710 "name": "BaseBdev3", 00:26:31.710 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:31.710 "is_configured": true, 00:26:31.710 "data_offset": 2048, 00:26:31.710 "data_size": 63488 00:26:31.710 }, 00:26:31.710 { 00:26:31.710 "name": "BaseBdev4", 00:26:31.710 "uuid": "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa", 00:26:31.710 "is_configured": true, 00:26:31.710 "data_offset": 2048, 00:26:31.710 "data_size": 63488 00:26:31.710 } 00:26:31.710 ] 00:26:31.710 }' 00:26:31.710 08:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.710 08:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:32.647 [2024-07-12 08:53:07.767279] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.647 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:32.647 "name": "Existed_Raid", 00:26:32.647 "aliases": [ 00:26:32.647 "3a864260-6769-48df-baa2-c38e75a1b4c1" 00:26:32.647 ], 00:26:32.647 "product_name": "Raid Volume", 00:26:32.647 "block_size": 512, 00:26:32.647 "num_blocks": 253952, 00:26:32.647 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:32.647 "assigned_rate_limits": { 00:26:32.647 "rw_ios_per_sec": 0, 00:26:32.647 "rw_mbytes_per_sec": 0, 00:26:32.647 "r_mbytes_per_sec": 0, 00:26:32.647 "w_mbytes_per_sec": 0 00:26:32.647 }, 00:26:32.647 "claimed": false, 00:26:32.647 "zoned": false, 00:26:32.647 "supported_io_types": { 00:26:32.647 "read": true, 00:26:32.647 "write": true, 00:26:32.647 "unmap": true, 00:26:32.647 "flush": true, 00:26:32.647 "reset": true, 00:26:32.647 "nvme_admin": false, 00:26:32.647 "nvme_io": false, 00:26:32.647 "nvme_io_md": false, 00:26:32.647 "write_zeroes": true, 00:26:32.647 "zcopy": false, 00:26:32.647 "get_zone_info": false, 00:26:32.647 "zone_management": false, 00:26:32.647 "zone_append": false, 00:26:32.647 "compare": false, 
00:26:32.647 "compare_and_write": false, 00:26:32.647 "abort": false, 00:26:32.647 "seek_hole": false, 00:26:32.647 "seek_data": false, 00:26:32.647 "copy": false, 00:26:32.647 "nvme_iov_md": false 00:26:32.647 }, 00:26:32.647 "memory_domains": [ 00:26:32.647 { 00:26:32.647 "dma_device_id": "system", 00:26:32.647 "dma_device_type": 1 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.647 "dma_device_type": 2 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "system", 00:26:32.647 "dma_device_type": 1 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.647 "dma_device_type": 2 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "system", 00:26:32.647 "dma_device_type": 1 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.647 "dma_device_type": 2 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "system", 00:26:32.647 "dma_device_type": 1 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.647 "dma_device_type": 2 00:26:32.647 } 00:26:32.647 ], 00:26:32.647 "driver_specific": { 00:26:32.647 "raid": { 00:26:32.647 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:32.647 "strip_size_kb": 64, 00:26:32.647 "state": "online", 00:26:32.647 "raid_level": "concat", 00:26:32.647 "superblock": true, 00:26:32.647 "num_base_bdevs": 4, 00:26:32.647 "num_base_bdevs_discovered": 4, 00:26:32.647 "num_base_bdevs_operational": 4, 00:26:32.647 "base_bdevs_list": [ 00:26:32.647 { 00:26:32.647 "name": "BaseBdev1", 00:26:32.647 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:32.647 "is_configured": true, 00:26:32.647 "data_offset": 2048, 00:26:32.647 "data_size": 63488 00:26:32.647 }, 00:26:32.647 { 00:26:32.647 "name": "BaseBdev2", 00:26:32.647 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:32.647 "is_configured": true, 00:26:32.647 "data_offset": 2048, 00:26:32.647 "data_size": 63488 00:26:32.647 }, 00:26:32.648 { 00:26:32.648 "name": "BaseBdev3", 00:26:32.648 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:32.648 "is_configured": true, 00:26:32.648 "data_offset": 2048, 00:26:32.648 "data_size": 63488 00:26:32.648 }, 00:26:32.648 { 00:26:32.648 "name": "BaseBdev4", 00:26:32.648 "uuid": "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa", 00:26:32.648 "is_configured": true, 00:26:32.648 "data_offset": 2048, 00:26:32.648 "data_size": 63488 00:26:32.648 } 00:26:32.648 ] 00:26:32.648 } 00:26:32.648 } 00:26:32.648 }' 00:26:32.648 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:32.648 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:32.648 BaseBdev2 00:26:32.648 BaseBdev3 00:26:32.648 BaseBdev4' 00:26:32.648 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.648 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.648 08:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.278 "name": "BaseBdev1", 00:26:33.278 "aliases": [ 00:26:33.278 "f25b0b2b-9148-430c-a1ee-63e7c3916665" 00:26:33.278 ], 
00:26:33.278 "product_name": "Malloc disk", 00:26:33.278 "block_size": 512, 00:26:33.278 "num_blocks": 65536, 00:26:33.278 "uuid": "f25b0b2b-9148-430c-a1ee-63e7c3916665", 00:26:33.278 "assigned_rate_limits": { 00:26:33.278 "rw_ios_per_sec": 0, 00:26:33.278 "rw_mbytes_per_sec": 0, 00:26:33.278 "r_mbytes_per_sec": 0, 00:26:33.278 "w_mbytes_per_sec": 0 00:26:33.278 }, 00:26:33.278 "claimed": true, 00:26:33.278 "claim_type": "exclusive_write", 00:26:33.278 "zoned": false, 00:26:33.278 "supported_io_types": { 00:26:33.278 "read": true, 00:26:33.278 "write": true, 00:26:33.278 "unmap": true, 00:26:33.278 "flush": true, 00:26:33.278 "reset": true, 00:26:33.278 "nvme_admin": false, 00:26:33.278 "nvme_io": false, 00:26:33.278 "nvme_io_md": false, 00:26:33.278 "write_zeroes": true, 00:26:33.278 "zcopy": true, 00:26:33.278 "get_zone_info": false, 00:26:33.278 "zone_management": false, 00:26:33.278 "zone_append": false, 00:26:33.278 "compare": false, 00:26:33.278 "compare_and_write": false, 00:26:33.278 "abort": true, 00:26:33.278 "seek_hole": false, 00:26:33.278 "seek_data": false, 00:26:33.278 "copy": true, 00:26:33.278 "nvme_iov_md": false 00:26:33.278 }, 00:26:33.278 "memory_domains": [ 00:26:33.278 { 00:26:33.278 "dma_device_id": "system", 00:26:33.278 "dma_device_type": 1 00:26:33.278 }, 00:26:33.278 { 00:26:33.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.278 "dma_device_type": 2 00:26:33.278 } 00:26:33.278 ], 00:26:33.278 "driver_specific": {} 00:26:33.278 }' 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.278 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.279 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.279 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.279 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.279 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.537 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.537 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.537 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.537 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:33.537 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.796 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.796 "name": "BaseBdev2", 00:26:33.796 "aliases": [ 00:26:33.796 "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9" 00:26:33.796 ], 00:26:33.796 "product_name": "Malloc disk", 00:26:33.796 "block_size": 512, 00:26:33.796 "num_blocks": 65536, 00:26:33.796 "uuid": 
"3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:33.796 "assigned_rate_limits": { 00:26:33.796 "rw_ios_per_sec": 0, 00:26:33.796 "rw_mbytes_per_sec": 0, 00:26:33.796 "r_mbytes_per_sec": 0, 00:26:33.796 "w_mbytes_per_sec": 0 00:26:33.796 }, 00:26:33.796 "claimed": true, 00:26:33.796 "claim_type": "exclusive_write", 00:26:33.796 "zoned": false, 00:26:33.796 "supported_io_types": { 00:26:33.796 "read": true, 00:26:33.796 "write": true, 00:26:33.796 "unmap": true, 00:26:33.796 "flush": true, 00:26:33.796 "reset": true, 00:26:33.796 "nvme_admin": false, 00:26:33.796 "nvme_io": false, 00:26:33.796 "nvme_io_md": false, 00:26:33.796 "write_zeroes": true, 00:26:33.796 "zcopy": true, 00:26:33.796 "get_zone_info": false, 00:26:33.796 "zone_management": false, 00:26:33.796 "zone_append": false, 00:26:33.796 "compare": false, 00:26:33.796 "compare_and_write": false, 00:26:33.796 "abort": true, 00:26:33.796 "seek_hole": false, 00:26:33.796 "seek_data": false, 00:26:33.796 "copy": true, 00:26:33.796 "nvme_iov_md": false 00:26:33.796 }, 00:26:33.796 "memory_domains": [ 00:26:33.796 { 00:26:33.796 "dma_device_id": "system", 00:26:33.796 "dma_device_type": 1 00:26:33.796 }, 00:26:33.796 { 00:26:33.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.796 "dma_device_type": 2 00:26:33.796 } 00:26:33.796 ], 00:26:33.796 "driver_specific": {} 00:26:33.796 }' 00:26:33.796 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.796 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.796 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.796 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.054 08:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.054 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.315 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.315 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:34.315 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:34.315 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:34.574 "name": "BaseBdev3", 00:26:34.574 "aliases": [ 00:26:34.574 "98494941-7933-470d-9535-f4b1cc2c3b8b" 00:26:34.574 ], 00:26:34.574 "product_name": "Malloc disk", 00:26:34.574 "block_size": 512, 00:26:34.574 "num_blocks": 65536, 00:26:34.574 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:34.574 "assigned_rate_limits": { 00:26:34.574 "rw_ios_per_sec": 0, 00:26:34.574 "rw_mbytes_per_sec": 0, 
00:26:34.574 "r_mbytes_per_sec": 0, 00:26:34.574 "w_mbytes_per_sec": 0 00:26:34.574 }, 00:26:34.574 "claimed": true, 00:26:34.574 "claim_type": "exclusive_write", 00:26:34.574 "zoned": false, 00:26:34.574 "supported_io_types": { 00:26:34.574 "read": true, 00:26:34.574 "write": true, 00:26:34.574 "unmap": true, 00:26:34.574 "flush": true, 00:26:34.574 "reset": true, 00:26:34.574 "nvme_admin": false, 00:26:34.574 "nvme_io": false, 00:26:34.574 "nvme_io_md": false, 00:26:34.574 "write_zeroes": true, 00:26:34.574 "zcopy": true, 00:26:34.574 "get_zone_info": false, 00:26:34.574 "zone_management": false, 00:26:34.574 "zone_append": false, 00:26:34.574 "compare": false, 00:26:34.574 "compare_and_write": false, 00:26:34.574 "abort": true, 00:26:34.574 "seek_hole": false, 00:26:34.574 "seek_data": false, 00:26:34.574 "copy": true, 00:26:34.574 "nvme_iov_md": false 00:26:34.574 }, 00:26:34.574 "memory_domains": [ 00:26:34.574 { 00:26:34.574 "dma_device_id": "system", 00:26:34.574 "dma_device_type": 1 00:26:34.574 }, 00:26:34.574 { 00:26:34.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.574 "dma_device_type": 2 00:26:34.574 } 00:26:34.574 ], 00:26:34.574 "driver_specific": {} 00:26:34.574 }' 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.574 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:34.833 08:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:35.400 "name": "BaseBdev4", 00:26:35.400 "aliases": [ 00:26:35.400 "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa" 00:26:35.400 ], 00:26:35.400 "product_name": "Malloc disk", 00:26:35.400 "block_size": 512, 00:26:35.400 "num_blocks": 65536, 00:26:35.400 "uuid": "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa", 00:26:35.400 "assigned_rate_limits": { 00:26:35.400 "rw_ios_per_sec": 0, 00:26:35.400 "rw_mbytes_per_sec": 0, 00:26:35.400 "r_mbytes_per_sec": 0, 00:26:35.400 "w_mbytes_per_sec": 0 00:26:35.400 }, 00:26:35.400 "claimed": true, 00:26:35.400 "claim_type": 
"exclusive_write", 00:26:35.400 "zoned": false, 00:26:35.400 "supported_io_types": { 00:26:35.400 "read": true, 00:26:35.400 "write": true, 00:26:35.400 "unmap": true, 00:26:35.400 "flush": true, 00:26:35.400 "reset": true, 00:26:35.400 "nvme_admin": false, 00:26:35.400 "nvme_io": false, 00:26:35.400 "nvme_io_md": false, 00:26:35.400 "write_zeroes": true, 00:26:35.400 "zcopy": true, 00:26:35.400 "get_zone_info": false, 00:26:35.400 "zone_management": false, 00:26:35.400 "zone_append": false, 00:26:35.400 "compare": false, 00:26:35.400 "compare_and_write": false, 00:26:35.400 "abort": true, 00:26:35.400 "seek_hole": false, 00:26:35.400 "seek_data": false, 00:26:35.400 "copy": true, 00:26:35.400 "nvme_iov_md": false 00:26:35.400 }, 00:26:35.400 "memory_domains": [ 00:26:35.400 { 00:26:35.400 "dma_device_id": "system", 00:26:35.400 "dma_device_type": 1 00:26:35.400 }, 00:26:35.400 { 00:26:35.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.400 "dma_device_type": 2 00:26:35.400 } 00:26:35.400 ], 00:26:35.400 "driver_specific": {} 00:26:35.400 }' 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.400 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.658 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.658 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:35.658 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:35.658 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:35.659 08:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:35.917 [2024-07-12 08:53:10.952290] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:35.917 [2024-07-12 08:53:10.952357] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.917 [2024-07-12 08:53:10.952441] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 
64 3 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.917 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.176 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.176 "name": "Existed_Raid", 00:26:36.176 "uuid": "3a864260-6769-48df-baa2-c38e75a1b4c1", 00:26:36.176 "strip_size_kb": 64, 00:26:36.176 "state": "offline", 00:26:36.176 "raid_level": "concat", 00:26:36.176 "superblock": true, 00:26:36.176 "num_base_bdevs": 4, 00:26:36.176 "num_base_bdevs_discovered": 3, 00:26:36.176 "num_base_bdevs_operational": 3, 00:26:36.176 "base_bdevs_list": [ 00:26:36.176 { 00:26:36.176 "name": null, 00:26:36.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.176 "is_configured": false, 00:26:36.176 "data_offset": 2048, 00:26:36.176 "data_size": 63488 00:26:36.176 }, 00:26:36.176 { 00:26:36.176 "name": "BaseBdev2", 00:26:36.176 "uuid": "3ea3b5af-9d19-4c13-8ad0-479616f4e2f9", 00:26:36.176 "is_configured": true, 00:26:36.176 "data_offset": 2048, 00:26:36.176 "data_size": 63488 00:26:36.176 }, 00:26:36.176 { 00:26:36.176 "name": "BaseBdev3", 00:26:36.176 "uuid": "98494941-7933-470d-9535-f4b1cc2c3b8b", 00:26:36.176 "is_configured": true, 00:26:36.176 "data_offset": 2048, 00:26:36.176 "data_size": 63488 00:26:36.176 }, 00:26:36.176 { 00:26:36.176 "name": "BaseBdev4", 00:26:36.176 "uuid": "7fae87f7-2b3e-4c3c-9b2c-ad5fa920adfa", 00:26:36.176 "is_configured": true, 00:26:36.176 "data_offset": 2048, 00:26:36.176 "data_size": 63488 00:26:36.176 } 00:26:36.176 ] 00:26:36.176 }' 00:26:36.176 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.176 08:53:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.113 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:37.113 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:37.113 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.113 08:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:37.113 08:53:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:37.113 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:37.113 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:37.372 [2024-07-12 08:53:12.528709] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:37.631 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:37.631 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:37.631 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.631 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:37.889 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:37.889 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:37.889 08:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:37.889 [2024-07-12 08:53:13.081352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:38.148 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:38.148 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:38.148 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.148 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:38.405 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:38.405 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:38.405 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:38.663 [2024-07-12 08:53:13.646331] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:38.663 [2024-07-12 08:53:13.646414] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:26:38.663 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:38.663 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:38.663 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.663 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:38.922 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:38.922 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:38.922 08:53:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:38.922 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:38.922 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:38.922 08:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:39.180 BaseBdev2 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:39.180 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:39.439 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:39.698 [ 00:26:39.698 { 00:26:39.698 "name": "BaseBdev2", 00:26:39.698 "aliases": [ 00:26:39.698 "8bcb747b-6120-4fe7-a2fd-43f23f4b3554" 00:26:39.698 ], 00:26:39.698 "product_name": "Malloc disk", 00:26:39.698 "block_size": 512, 00:26:39.698 "num_blocks": 65536, 00:26:39.698 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:39.698 "assigned_rate_limits": { 00:26:39.698 "rw_ios_per_sec": 0, 00:26:39.698 "rw_mbytes_per_sec": 0, 00:26:39.698 "r_mbytes_per_sec": 0, 00:26:39.698 "w_mbytes_per_sec": 0 00:26:39.698 }, 00:26:39.698 "claimed": false, 00:26:39.698 "zoned": false, 00:26:39.698 "supported_io_types": { 00:26:39.698 "read": true, 00:26:39.698 "write": true, 00:26:39.698 "unmap": true, 00:26:39.698 "flush": true, 00:26:39.698 "reset": true, 00:26:39.698 "nvme_admin": false, 00:26:39.698 "nvme_io": false, 00:26:39.698 "nvme_io_md": false, 00:26:39.698 "write_zeroes": true, 00:26:39.698 "zcopy": true, 00:26:39.698 "get_zone_info": false, 00:26:39.698 "zone_management": false, 00:26:39.698 "zone_append": false, 00:26:39.698 "compare": false, 00:26:39.698 "compare_and_write": false, 00:26:39.698 "abort": true, 00:26:39.698 "seek_hole": false, 00:26:39.698 "seek_data": false, 00:26:39.698 "copy": true, 00:26:39.698 "nvme_iov_md": false 00:26:39.698 }, 00:26:39.698 "memory_domains": [ 00:26:39.698 { 00:26:39.698 "dma_device_id": "system", 00:26:39.698 "dma_device_type": 1 00:26:39.698 }, 00:26:39.698 { 00:26:39.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.698 "dma_device_type": 2 00:26:39.698 } 00:26:39.698 ], 00:26:39.698 "driver_specific": {} 00:26:39.698 } 00:26:39.698 ] 00:26:39.698 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:39.698 08:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:39.698 08:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
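
Note: the entries above (bdev_raid.sh@301-@303 together with the waitforbdev helper from autotest_common.sh) reduce to a small loop: create one malloc bdev per remaining slot, let examine callbacks settle, then poll until the bdev is visible. A minimal sketch of that sequence, assuming the same rpc.py socket (/var/tmp/spdk-raid.sock) and repo path used throughout this run; it is an illustration of the traced steps, not the test script itself:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    num_base_bdevs=4
    for ((i = 1; i < num_base_bdevs; i++)); do
        name="BaseBdev$((i + 1))"
        # 32 MiB malloc bdev with 512-byte blocks, matching "bdev_malloc_create 32 512 -b ..." above
        # (the trace later reports num_blocks 65536 * block_size 512 = 32 MiB)
        $rpc bdev_malloc_create 32 512 -b "$name"
        # allow any pending examine callbacks to finish before querying
        $rpc bdev_wait_for_examine
        # waitforbdev equivalent: query with the same -t 2000 timeout used in the trace
        $rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null
    done
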
00:26:39.698 08:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:39.957 BaseBdev3 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:39.957 08:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:40.216 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:40.475 [ 00:26:40.475 { 00:26:40.475 "name": "BaseBdev3", 00:26:40.475 "aliases": [ 00:26:40.475 "330671e5-54d6-4e13-8a34-a0f9489061a2" 00:26:40.475 ], 00:26:40.475 "product_name": "Malloc disk", 00:26:40.475 "block_size": 512, 00:26:40.475 "num_blocks": 65536, 00:26:40.475 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:40.475 "assigned_rate_limits": { 00:26:40.475 "rw_ios_per_sec": 0, 00:26:40.475 "rw_mbytes_per_sec": 0, 00:26:40.475 "r_mbytes_per_sec": 0, 00:26:40.475 "w_mbytes_per_sec": 0 00:26:40.475 }, 00:26:40.475 "claimed": false, 00:26:40.475 "zoned": false, 00:26:40.475 "supported_io_types": { 00:26:40.475 "read": true, 00:26:40.475 "write": true, 00:26:40.475 "unmap": true, 00:26:40.475 "flush": true, 00:26:40.475 "reset": true, 00:26:40.475 "nvme_admin": false, 00:26:40.475 "nvme_io": false, 00:26:40.475 "nvme_io_md": false, 00:26:40.475 "write_zeroes": true, 00:26:40.475 "zcopy": true, 00:26:40.475 "get_zone_info": false, 00:26:40.475 "zone_management": false, 00:26:40.475 "zone_append": false, 00:26:40.475 "compare": false, 00:26:40.475 "compare_and_write": false, 00:26:40.475 "abort": true, 00:26:40.475 "seek_hole": false, 00:26:40.475 "seek_data": false, 00:26:40.475 "copy": true, 00:26:40.475 "nvme_iov_md": false 00:26:40.475 }, 00:26:40.475 "memory_domains": [ 00:26:40.475 { 00:26:40.475 "dma_device_id": "system", 00:26:40.475 "dma_device_type": 1 00:26:40.475 }, 00:26:40.475 { 00:26:40.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.475 "dma_device_type": 2 00:26:40.475 } 00:26:40.475 ], 00:26:40.475 "driver_specific": {} 00:26:40.475 } 00:26:40.475 ] 00:26:40.475 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:40.475 08:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:40.475 08:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:40.475 08:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:40.734 BaseBdev4 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:40.734 08:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:40.993 08:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:41.265 [ 00:26:41.265 { 00:26:41.265 "name": "BaseBdev4", 00:26:41.265 "aliases": [ 00:26:41.265 "d3633d8d-3a56-4cbe-b9a9-a21256702fdc" 00:26:41.265 ], 00:26:41.265 "product_name": "Malloc disk", 00:26:41.265 "block_size": 512, 00:26:41.265 "num_blocks": 65536, 00:26:41.265 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:41.265 "assigned_rate_limits": { 00:26:41.265 "rw_ios_per_sec": 0, 00:26:41.265 "rw_mbytes_per_sec": 0, 00:26:41.265 "r_mbytes_per_sec": 0, 00:26:41.265 "w_mbytes_per_sec": 0 00:26:41.265 }, 00:26:41.265 "claimed": false, 00:26:41.265 "zoned": false, 00:26:41.265 "supported_io_types": { 00:26:41.265 "read": true, 00:26:41.265 "write": true, 00:26:41.265 "unmap": true, 00:26:41.265 "flush": true, 00:26:41.265 "reset": true, 00:26:41.265 "nvme_admin": false, 00:26:41.265 "nvme_io": false, 00:26:41.265 "nvme_io_md": false, 00:26:41.265 "write_zeroes": true, 00:26:41.265 "zcopy": true, 00:26:41.265 "get_zone_info": false, 00:26:41.265 "zone_management": false, 00:26:41.265 "zone_append": false, 00:26:41.265 "compare": false, 00:26:41.265 "compare_and_write": false, 00:26:41.265 "abort": true, 00:26:41.265 "seek_hole": false, 00:26:41.265 "seek_data": false, 00:26:41.265 "copy": true, 00:26:41.265 "nvme_iov_md": false 00:26:41.265 }, 00:26:41.265 "memory_domains": [ 00:26:41.265 { 00:26:41.265 "dma_device_id": "system", 00:26:41.265 "dma_device_type": 1 00:26:41.265 }, 00:26:41.265 { 00:26:41.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.265 "dma_device_type": 2 00:26:41.265 } 00:26:41.265 ], 00:26:41.265 "driver_specific": {} 00:26:41.265 } 00:26:41.265 ] 00:26:41.265 08:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:41.265 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:41.265 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:41.265 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:41.524 [2024-07-12 08:53:16.518684] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:41.524 [2024-07-12 08:53:16.518799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:41.524 [2024-07-12 08:53:16.518855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:26:41.524 [2024-07-12 08:53:16.521052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:41.524 [2024-07-12 08:53:16.521178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:41.524 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:41.524 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:41.524 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:41.524 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:41.524 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.525 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.784 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.784 "name": "Existed_Raid", 00:26:41.784 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:41.784 "strip_size_kb": 64, 00:26:41.784 "state": "configuring", 00:26:41.784 "raid_level": "concat", 00:26:41.784 "superblock": true, 00:26:41.784 "num_base_bdevs": 4, 00:26:41.784 "num_base_bdevs_discovered": 3, 00:26:41.784 "num_base_bdevs_operational": 4, 00:26:41.784 "base_bdevs_list": [ 00:26:41.784 { 00:26:41.784 "name": "BaseBdev1", 00:26:41.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.784 "is_configured": false, 00:26:41.784 "data_offset": 0, 00:26:41.784 "data_size": 0 00:26:41.784 }, 00:26:41.784 { 00:26:41.784 "name": "BaseBdev2", 00:26:41.784 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:41.784 "is_configured": true, 00:26:41.784 "data_offset": 2048, 00:26:41.784 "data_size": 63488 00:26:41.784 }, 00:26:41.784 { 00:26:41.784 "name": "BaseBdev3", 00:26:41.784 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:41.784 "is_configured": true, 00:26:41.784 "data_offset": 2048, 00:26:41.784 "data_size": 63488 00:26:41.784 }, 00:26:41.784 { 00:26:41.784 "name": "BaseBdev4", 00:26:41.784 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:41.784 "is_configured": true, 00:26:41.784 "data_offset": 2048, 00:26:41.784 "data_size": 63488 00:26:41.784 } 00:26:41.784 ] 00:26:41.784 }' 00:26:41.784 08:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.784 08:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.351 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:42.609 [2024-07-12 08:53:17.738906] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:42.609 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.610 08:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.869 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.869 "name": "Existed_Raid", 00:26:42.869 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:42.869 "strip_size_kb": 64, 00:26:42.869 "state": "configuring", 00:26:42.869 "raid_level": "concat", 00:26:42.869 "superblock": true, 00:26:42.869 "num_base_bdevs": 4, 00:26:42.869 "num_base_bdevs_discovered": 2, 00:26:42.869 "num_base_bdevs_operational": 4, 00:26:42.869 "base_bdevs_list": [ 00:26:42.869 { 00:26:42.869 "name": "BaseBdev1", 00:26:42.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.869 "is_configured": false, 00:26:42.869 "data_offset": 0, 00:26:42.869 "data_size": 0 00:26:42.869 }, 00:26:42.869 { 00:26:42.869 "name": null, 00:26:42.869 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:42.869 "is_configured": false, 00:26:42.869 "data_offset": 2048, 00:26:42.869 "data_size": 63488 00:26:42.869 }, 00:26:42.869 { 00:26:42.869 "name": "BaseBdev3", 00:26:42.869 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:42.869 "is_configured": true, 00:26:42.869 "data_offset": 2048, 00:26:42.869 "data_size": 63488 00:26:42.869 }, 00:26:42.869 { 00:26:42.869 "name": "BaseBdev4", 00:26:42.869 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:42.869 "is_configured": true, 00:26:42.869 "data_offset": 2048, 00:26:42.869 "data_size": 63488 00:26:42.869 } 00:26:42.869 ] 00:26:42.869 }' 00:26:42.869 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.869 08:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.816 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:26:43.816 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:43.816 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:43.816 08:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:44.105 [2024-07-12 08:53:19.265939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:44.105 BaseBdev1 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:44.105 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:44.364 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:44.624 [ 00:26:44.624 { 00:26:44.624 "name": "BaseBdev1", 00:26:44.624 "aliases": [ 00:26:44.624 "e42b7a22-8d91-432c-ba5f-3d39dc702a3b" 00:26:44.624 ], 00:26:44.624 "product_name": "Malloc disk", 00:26:44.624 "block_size": 512, 00:26:44.624 "num_blocks": 65536, 00:26:44.624 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:44.624 "assigned_rate_limits": { 00:26:44.624 "rw_ios_per_sec": 0, 00:26:44.624 "rw_mbytes_per_sec": 0, 00:26:44.624 "r_mbytes_per_sec": 0, 00:26:44.624 "w_mbytes_per_sec": 0 00:26:44.624 }, 00:26:44.624 "claimed": true, 00:26:44.624 "claim_type": "exclusive_write", 00:26:44.624 "zoned": false, 00:26:44.624 "supported_io_types": { 00:26:44.624 "read": true, 00:26:44.624 "write": true, 00:26:44.624 "unmap": true, 00:26:44.624 "flush": true, 00:26:44.624 "reset": true, 00:26:44.624 "nvme_admin": false, 00:26:44.624 "nvme_io": false, 00:26:44.624 "nvme_io_md": false, 00:26:44.624 "write_zeroes": true, 00:26:44.624 "zcopy": true, 00:26:44.624 "get_zone_info": false, 00:26:44.624 "zone_management": false, 00:26:44.624 "zone_append": false, 00:26:44.624 "compare": false, 00:26:44.624 "compare_and_write": false, 00:26:44.624 "abort": true, 00:26:44.624 "seek_hole": false, 00:26:44.624 "seek_data": false, 00:26:44.624 "copy": true, 00:26:44.624 "nvme_iov_md": false 00:26:44.624 }, 00:26:44.624 "memory_domains": [ 00:26:44.624 { 00:26:44.624 "dma_device_id": "system", 00:26:44.624 "dma_device_type": 1 00:26:44.624 }, 00:26:44.624 { 00:26:44.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.624 "dma_device_type": 2 00:26:44.624 } 00:26:44.624 ], 00:26:44.624 "driver_specific": {} 00:26:44.624 } 00:26:44.624 ] 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.624 08:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:44.883 08:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.883 "name": "Existed_Raid", 00:26:44.883 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:44.883 "strip_size_kb": 64, 00:26:44.883 "state": "configuring", 00:26:44.883 "raid_level": "concat", 00:26:44.883 "superblock": true, 00:26:44.883 "num_base_bdevs": 4, 00:26:44.883 "num_base_bdevs_discovered": 3, 00:26:44.883 "num_base_bdevs_operational": 4, 00:26:44.883 "base_bdevs_list": [ 00:26:44.883 { 00:26:44.883 "name": "BaseBdev1", 00:26:44.883 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:44.883 "is_configured": true, 00:26:44.883 "data_offset": 2048, 00:26:44.883 "data_size": 63488 00:26:44.883 }, 00:26:44.883 { 00:26:44.883 "name": null, 00:26:44.884 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:44.884 "is_configured": false, 00:26:44.884 "data_offset": 2048, 00:26:44.884 "data_size": 63488 00:26:44.884 }, 00:26:44.884 { 00:26:44.884 "name": "BaseBdev3", 00:26:44.884 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:44.884 "is_configured": true, 00:26:44.884 "data_offset": 2048, 00:26:44.884 "data_size": 63488 00:26:44.884 }, 00:26:44.884 { 00:26:44.884 "name": "BaseBdev4", 00:26:44.884 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:44.884 "is_configured": true, 00:26:44.884 "data_offset": 2048, 00:26:44.884 "data_size": 63488 00:26:44.884 } 00:26:44.884 ] 00:26:44.884 }' 00:26:44.884 08:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.884 08:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.820 08:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.820 08:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:46.077 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:46.077 08:53:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:46.335 [2024-07-12 08:53:21.326465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.335 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:46.592 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.592 "name": "Existed_Raid", 00:26:46.592 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:46.592 "strip_size_kb": 64, 00:26:46.592 "state": "configuring", 00:26:46.592 "raid_level": "concat", 00:26:46.592 "superblock": true, 00:26:46.592 "num_base_bdevs": 4, 00:26:46.592 "num_base_bdevs_discovered": 2, 00:26:46.592 "num_base_bdevs_operational": 4, 00:26:46.592 "base_bdevs_list": [ 00:26:46.592 { 00:26:46.592 "name": "BaseBdev1", 00:26:46.592 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:46.592 "is_configured": true, 00:26:46.592 "data_offset": 2048, 00:26:46.592 "data_size": 63488 00:26:46.592 }, 00:26:46.592 { 00:26:46.592 "name": null, 00:26:46.592 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:46.592 "is_configured": false, 00:26:46.592 "data_offset": 2048, 00:26:46.592 "data_size": 63488 00:26:46.592 }, 00:26:46.592 { 00:26:46.592 "name": null, 00:26:46.592 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:46.592 "is_configured": false, 00:26:46.592 "data_offset": 2048, 00:26:46.592 "data_size": 63488 00:26:46.592 }, 00:26:46.592 { 00:26:46.592 "name": "BaseBdev4", 00:26:46.592 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:46.592 "is_configured": true, 00:26:46.592 "data_offset": 2048, 00:26:46.592 "data_size": 63488 00:26:46.592 } 00:26:46.592 ] 00:26:46.592 }' 00:26:46.592 08:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.592 08:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.157 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.157 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:47.722 [2024-07-12 08:53:22.850869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.722 08:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:47.980 08:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.980 "name": "Existed_Raid", 00:26:47.980 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:47.980 "strip_size_kb": 64, 00:26:47.980 "state": "configuring", 00:26:47.980 "raid_level": "concat", 00:26:47.980 "superblock": true, 00:26:47.980 "num_base_bdevs": 4, 00:26:47.980 "num_base_bdevs_discovered": 3, 00:26:47.980 "num_base_bdevs_operational": 4, 00:26:47.980 "base_bdevs_list": [ 00:26:47.980 { 00:26:47.980 "name": "BaseBdev1", 00:26:47.980 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:47.980 "is_configured": true, 00:26:47.980 "data_offset": 2048, 00:26:47.980 "data_size": 63488 00:26:47.980 }, 00:26:47.980 { 00:26:47.980 "name": null, 00:26:47.980 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:47.980 "is_configured": false, 00:26:47.980 "data_offset": 2048, 00:26:47.980 "data_size": 63488 00:26:47.980 }, 00:26:47.980 { 00:26:47.980 "name": "BaseBdev3", 00:26:47.980 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:47.980 "is_configured": true, 00:26:47.980 "data_offset": 2048, 00:26:47.980 "data_size": 63488 00:26:47.980 }, 00:26:47.980 { 00:26:47.980 "name": "BaseBdev4", 00:26:47.980 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:47.980 "is_configured": true, 00:26:47.980 "data_offset": 2048, 
00:26:47.980 "data_size": 63488 00:26:47.980 } 00:26:47.980 ] 00:26:47.980 }' 00:26:47.980 08:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.980 08:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.911 08:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.911 08:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:48.911 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:48.911 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:49.168 [2024-07-12 08:53:24.320896] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.426 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.683 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.683 "name": "Existed_Raid", 00:26:49.683 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:49.683 "strip_size_kb": 64, 00:26:49.683 "state": "configuring", 00:26:49.683 "raid_level": "concat", 00:26:49.683 "superblock": true, 00:26:49.683 "num_base_bdevs": 4, 00:26:49.683 "num_base_bdevs_discovered": 2, 00:26:49.683 "num_base_bdevs_operational": 4, 00:26:49.683 "base_bdevs_list": [ 00:26:49.683 { 00:26:49.683 "name": null, 00:26:49.683 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:49.683 "is_configured": false, 00:26:49.683 "data_offset": 2048, 00:26:49.683 "data_size": 63488 00:26:49.683 }, 00:26:49.683 { 00:26:49.683 "name": null, 00:26:49.683 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:49.683 "is_configured": false, 00:26:49.683 "data_offset": 2048, 00:26:49.683 "data_size": 63488 00:26:49.683 }, 00:26:49.683 { 00:26:49.683 "name": "BaseBdev3", 00:26:49.683 "uuid": 
"330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:49.683 "is_configured": true, 00:26:49.683 "data_offset": 2048, 00:26:49.683 "data_size": 63488 00:26:49.683 }, 00:26:49.683 { 00:26:49.683 "name": "BaseBdev4", 00:26:49.683 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:49.683 "is_configured": true, 00:26:49.683 "data_offset": 2048, 00:26:49.683 "data_size": 63488 00:26:49.683 } 00:26:49.683 ] 00:26:49.683 }' 00:26:49.683 08:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.683 08:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.248 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.248 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:50.505 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:50.505 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:50.762 [2024-07-12 08:53:25.858637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.762 08:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.020 08:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.020 "name": "Existed_Raid", 00:26:51.020 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:51.020 "strip_size_kb": 64, 00:26:51.020 "state": "configuring", 00:26:51.020 "raid_level": "concat", 00:26:51.020 "superblock": true, 00:26:51.020 "num_base_bdevs": 4, 00:26:51.020 "num_base_bdevs_discovered": 3, 00:26:51.020 "num_base_bdevs_operational": 4, 00:26:51.020 "base_bdevs_list": [ 00:26:51.020 { 00:26:51.020 "name": null, 00:26:51.020 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:51.020 "is_configured": false, 
00:26:51.020 "data_offset": 2048, 00:26:51.020 "data_size": 63488 00:26:51.020 }, 00:26:51.020 { 00:26:51.020 "name": "BaseBdev2", 00:26:51.020 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:51.020 "is_configured": true, 00:26:51.020 "data_offset": 2048, 00:26:51.020 "data_size": 63488 00:26:51.020 }, 00:26:51.020 { 00:26:51.020 "name": "BaseBdev3", 00:26:51.020 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:51.020 "is_configured": true, 00:26:51.020 "data_offset": 2048, 00:26:51.020 "data_size": 63488 00:26:51.020 }, 00:26:51.020 { 00:26:51.020 "name": "BaseBdev4", 00:26:51.020 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:51.020 "is_configured": true, 00:26:51.020 "data_offset": 2048, 00:26:51.020 "data_size": 63488 00:26:51.020 } 00:26:51.020 ] 00:26:51.020 }' 00:26:51.020 08:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.020 08:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.010 08:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.010 08:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:52.010 08:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:52.010 08:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.010 08:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:52.267 08:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e42b7a22-8d91-432c-ba5f-3d39dc702a3b 00:26:52.832 [2024-07-12 08:53:27.720371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:52.832 [2024-07-12 08:53:27.720687] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:52.832 [2024-07-12 08:53:27.720704] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:52.832 [2024-07-12 08:53:27.720845] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:52.832 NewBaseBdev 00:26:52.832 [2024-07-12 08:53:27.721276] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:52.832 [2024-07-12 08:53:27.721305] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:26:52.832 [2024-07-12 08:53:27.721465] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:52.832 08:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:53.090 [ 00:26:53.090 { 00:26:53.090 "name": "NewBaseBdev", 00:26:53.090 "aliases": [ 00:26:53.090 "e42b7a22-8d91-432c-ba5f-3d39dc702a3b" 00:26:53.090 ], 00:26:53.090 "product_name": "Malloc disk", 00:26:53.090 "block_size": 512, 00:26:53.090 "num_blocks": 65536, 00:26:53.090 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:53.090 "assigned_rate_limits": { 00:26:53.090 "rw_ios_per_sec": 0, 00:26:53.090 "rw_mbytes_per_sec": 0, 00:26:53.090 "r_mbytes_per_sec": 0, 00:26:53.090 "w_mbytes_per_sec": 0 00:26:53.090 }, 00:26:53.090 "claimed": true, 00:26:53.090 "claim_type": "exclusive_write", 00:26:53.090 "zoned": false, 00:26:53.090 "supported_io_types": { 00:26:53.090 "read": true, 00:26:53.090 "write": true, 00:26:53.090 "unmap": true, 00:26:53.090 "flush": true, 00:26:53.090 "reset": true, 00:26:53.090 "nvme_admin": false, 00:26:53.090 "nvme_io": false, 00:26:53.090 "nvme_io_md": false, 00:26:53.090 "write_zeroes": true, 00:26:53.090 "zcopy": true, 00:26:53.090 "get_zone_info": false, 00:26:53.090 "zone_management": false, 00:26:53.090 "zone_append": false, 00:26:53.090 "compare": false, 00:26:53.090 "compare_and_write": false, 00:26:53.090 "abort": true, 00:26:53.090 "seek_hole": false, 00:26:53.090 "seek_data": false, 00:26:53.090 "copy": true, 00:26:53.090 "nvme_iov_md": false 00:26:53.090 }, 00:26:53.090 "memory_domains": [ 00:26:53.090 { 00:26:53.090 "dma_device_id": "system", 00:26:53.090 "dma_device_type": 1 00:26:53.090 }, 00:26:53.090 { 00:26:53.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.090 "dma_device_type": 2 00:26:53.090 } 00:26:53.090 ], 00:26:53.090 "driver_specific": {} 00:26:53.090 } 00:26:53.090 ] 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.090 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.350 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:53.350 "name": "Existed_Raid", 00:26:53.350 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:53.350 "strip_size_kb": 64, 00:26:53.350 "state": "online", 00:26:53.350 "raid_level": "concat", 00:26:53.350 "superblock": true, 00:26:53.350 "num_base_bdevs": 4, 00:26:53.350 "num_base_bdevs_discovered": 4, 00:26:53.350 "num_base_bdevs_operational": 4, 00:26:53.350 "base_bdevs_list": [ 00:26:53.350 { 00:26:53.350 "name": "NewBaseBdev", 00:26:53.350 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:53.350 "is_configured": true, 00:26:53.350 "data_offset": 2048, 00:26:53.350 "data_size": 63488 00:26:53.350 }, 00:26:53.350 { 00:26:53.350 "name": "BaseBdev2", 00:26:53.350 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:53.350 "is_configured": true, 00:26:53.350 "data_offset": 2048, 00:26:53.350 "data_size": 63488 00:26:53.350 }, 00:26:53.350 { 00:26:53.350 "name": "BaseBdev3", 00:26:53.350 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:53.350 "is_configured": true, 00:26:53.350 "data_offset": 2048, 00:26:53.350 "data_size": 63488 00:26:53.350 }, 00:26:53.350 { 00:26:53.350 "name": "BaseBdev4", 00:26:53.350 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:53.350 "is_configured": true, 00:26:53.350 "data_offset": 2048, 00:26:53.350 "data_size": 63488 00:26:53.350 } 00:26:53.350 ] 00:26:53.350 }' 00:26:53.350 08:53:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:53.350 08:53:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:54.315 [2024-07-12 08:53:29.403567] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:54.315 "name": "Existed_Raid", 00:26:54.315 "aliases": [ 00:26:54.315 "0ae3dc3b-d891-423e-a7fb-9411d8712612" 00:26:54.315 ], 00:26:54.315 "product_name": "Raid Volume", 00:26:54.315 "block_size": 512, 00:26:54.315 "num_blocks": 253952, 00:26:54.315 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:54.315 "assigned_rate_limits": { 00:26:54.315 "rw_ios_per_sec": 0, 00:26:54.315 "rw_mbytes_per_sec": 0, 00:26:54.315 "r_mbytes_per_sec": 0, 00:26:54.315 "w_mbytes_per_sec": 0 00:26:54.315 }, 
00:26:54.315 "claimed": false, 00:26:54.315 "zoned": false, 00:26:54.315 "supported_io_types": { 00:26:54.315 "read": true, 00:26:54.315 "write": true, 00:26:54.315 "unmap": true, 00:26:54.315 "flush": true, 00:26:54.315 "reset": true, 00:26:54.315 "nvme_admin": false, 00:26:54.315 "nvme_io": false, 00:26:54.315 "nvme_io_md": false, 00:26:54.315 "write_zeroes": true, 00:26:54.315 "zcopy": false, 00:26:54.315 "get_zone_info": false, 00:26:54.315 "zone_management": false, 00:26:54.315 "zone_append": false, 00:26:54.315 "compare": false, 00:26:54.315 "compare_and_write": false, 00:26:54.315 "abort": false, 00:26:54.315 "seek_hole": false, 00:26:54.315 "seek_data": false, 00:26:54.315 "copy": false, 00:26:54.315 "nvme_iov_md": false 00:26:54.315 }, 00:26:54.315 "memory_domains": [ 00:26:54.315 { 00:26:54.315 "dma_device_id": "system", 00:26:54.315 "dma_device_type": 1 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.315 "dma_device_type": 2 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "system", 00:26:54.315 "dma_device_type": 1 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.315 "dma_device_type": 2 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "system", 00:26:54.315 "dma_device_type": 1 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.315 "dma_device_type": 2 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "system", 00:26:54.315 "dma_device_type": 1 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.315 "dma_device_type": 2 00:26:54.315 } 00:26:54.315 ], 00:26:54.315 "driver_specific": { 00:26:54.315 "raid": { 00:26:54.315 "uuid": "0ae3dc3b-d891-423e-a7fb-9411d8712612", 00:26:54.315 "strip_size_kb": 64, 00:26:54.315 "state": "online", 00:26:54.315 "raid_level": "concat", 00:26:54.315 "superblock": true, 00:26:54.315 "num_base_bdevs": 4, 00:26:54.315 "num_base_bdevs_discovered": 4, 00:26:54.315 "num_base_bdevs_operational": 4, 00:26:54.315 "base_bdevs_list": [ 00:26:54.315 { 00:26:54.315 "name": "NewBaseBdev", 00:26:54.315 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:54.315 "is_configured": true, 00:26:54.315 "data_offset": 2048, 00:26:54.315 "data_size": 63488 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "name": "BaseBdev2", 00:26:54.315 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:54.315 "is_configured": true, 00:26:54.315 "data_offset": 2048, 00:26:54.315 "data_size": 63488 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "name": "BaseBdev3", 00:26:54.315 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:54.315 "is_configured": true, 00:26:54.315 "data_offset": 2048, 00:26:54.315 "data_size": 63488 00:26:54.315 }, 00:26:54.315 { 00:26:54.315 "name": "BaseBdev4", 00:26:54.315 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:54.315 "is_configured": true, 00:26:54.315 "data_offset": 2048, 00:26:54.315 "data_size": 63488 00:26:54.315 } 00:26:54.315 ] 00:26:54.315 } 00:26:54.315 } 00:26:54.315 }' 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:54.315 BaseBdev2 00:26:54.315 BaseBdev3 00:26:54.315 BaseBdev4' 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:54.315 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:54.574 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:54.574 "name": "NewBaseBdev", 00:26:54.574 "aliases": [ 00:26:54.574 "e42b7a22-8d91-432c-ba5f-3d39dc702a3b" 00:26:54.574 ], 00:26:54.574 "product_name": "Malloc disk", 00:26:54.574 "block_size": 512, 00:26:54.574 "num_blocks": 65536, 00:26:54.574 "uuid": "e42b7a22-8d91-432c-ba5f-3d39dc702a3b", 00:26:54.574 "assigned_rate_limits": { 00:26:54.574 "rw_ios_per_sec": 0, 00:26:54.574 "rw_mbytes_per_sec": 0, 00:26:54.574 "r_mbytes_per_sec": 0, 00:26:54.574 "w_mbytes_per_sec": 0 00:26:54.574 }, 00:26:54.574 "claimed": true, 00:26:54.574 "claim_type": "exclusive_write", 00:26:54.574 "zoned": false, 00:26:54.574 "supported_io_types": { 00:26:54.574 "read": true, 00:26:54.574 "write": true, 00:26:54.574 "unmap": true, 00:26:54.574 "flush": true, 00:26:54.574 "reset": true, 00:26:54.574 "nvme_admin": false, 00:26:54.574 "nvme_io": false, 00:26:54.574 "nvme_io_md": false, 00:26:54.574 "write_zeroes": true, 00:26:54.574 "zcopy": true, 00:26:54.574 "get_zone_info": false, 00:26:54.574 "zone_management": false, 00:26:54.574 "zone_append": false, 00:26:54.574 "compare": false, 00:26:54.574 "compare_and_write": false, 00:26:54.574 "abort": true, 00:26:54.574 "seek_hole": false, 00:26:54.574 "seek_data": false, 00:26:54.574 "copy": true, 00:26:54.574 "nvme_iov_md": false 00:26:54.574 }, 00:26:54.574 "memory_domains": [ 00:26:54.574 { 00:26:54.574 "dma_device_id": "system", 00:26:54.574 "dma_device_type": 1 00:26:54.574 }, 00:26:54.574 { 00:26:54.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.574 "dma_device_type": 2 00:26:54.574 } 00:26:54.574 ], 00:26:54.574 "driver_specific": {} 00:26:54.574 }' 00:26:54.574 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.574 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.832 08:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:55.090 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:55.349 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:55.349 "name": "BaseBdev2", 00:26:55.349 "aliases": [ 00:26:55.349 "8bcb747b-6120-4fe7-a2fd-43f23f4b3554" 00:26:55.349 ], 00:26:55.349 "product_name": "Malloc disk", 00:26:55.349 "block_size": 512, 00:26:55.349 "num_blocks": 65536, 00:26:55.349 "uuid": "8bcb747b-6120-4fe7-a2fd-43f23f4b3554", 00:26:55.349 "assigned_rate_limits": { 00:26:55.349 "rw_ios_per_sec": 0, 00:26:55.349 "rw_mbytes_per_sec": 0, 00:26:55.349 "r_mbytes_per_sec": 0, 00:26:55.349 "w_mbytes_per_sec": 0 00:26:55.349 }, 00:26:55.349 "claimed": true, 00:26:55.349 "claim_type": "exclusive_write", 00:26:55.349 "zoned": false, 00:26:55.349 "supported_io_types": { 00:26:55.349 "read": true, 00:26:55.349 "write": true, 00:26:55.349 "unmap": true, 00:26:55.349 "flush": true, 00:26:55.349 "reset": true, 00:26:55.349 "nvme_admin": false, 00:26:55.349 "nvme_io": false, 00:26:55.349 "nvme_io_md": false, 00:26:55.349 "write_zeroes": true, 00:26:55.349 "zcopy": true, 00:26:55.349 "get_zone_info": false, 00:26:55.349 "zone_management": false, 00:26:55.349 "zone_append": false, 00:26:55.349 "compare": false, 00:26:55.349 "compare_and_write": false, 00:26:55.349 "abort": true, 00:26:55.349 "seek_hole": false, 00:26:55.349 "seek_data": false, 00:26:55.349 "copy": true, 00:26:55.349 "nvme_iov_md": false 00:26:55.349 }, 00:26:55.349 "memory_domains": [ 00:26:55.349 { 00:26:55.349 "dma_device_id": "system", 00:26:55.349 "dma_device_type": 1 00:26:55.349 }, 00:26:55.349 { 00:26:55.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.349 "dma_device_type": 2 00:26:55.349 } 00:26:55.349 ], 00:26:55.349 "driver_specific": {} 00:26:55.349 }' 00:26:55.349 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.349 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.349 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:55.349 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.607 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.865 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.865 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.865 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.865 08:53:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:55.865 08:53:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:56.122 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:56.123 "name": "BaseBdev3", 00:26:56.123 "aliases": [ 00:26:56.123 "330671e5-54d6-4e13-8a34-a0f9489061a2" 00:26:56.123 ], 00:26:56.123 "product_name": "Malloc disk", 00:26:56.123 "block_size": 512, 00:26:56.123 "num_blocks": 65536, 00:26:56.123 "uuid": "330671e5-54d6-4e13-8a34-a0f9489061a2", 00:26:56.123 "assigned_rate_limits": { 00:26:56.123 "rw_ios_per_sec": 0, 00:26:56.123 "rw_mbytes_per_sec": 0, 00:26:56.123 "r_mbytes_per_sec": 0, 00:26:56.123 "w_mbytes_per_sec": 0 00:26:56.123 }, 00:26:56.123 "claimed": true, 00:26:56.123 "claim_type": "exclusive_write", 00:26:56.123 "zoned": false, 00:26:56.123 "supported_io_types": { 00:26:56.123 "read": true, 00:26:56.123 "write": true, 00:26:56.123 "unmap": true, 00:26:56.123 "flush": true, 00:26:56.123 "reset": true, 00:26:56.123 "nvme_admin": false, 00:26:56.123 "nvme_io": false, 00:26:56.123 "nvme_io_md": false, 00:26:56.123 "write_zeroes": true, 00:26:56.123 "zcopy": true, 00:26:56.123 "get_zone_info": false, 00:26:56.123 "zone_management": false, 00:26:56.123 "zone_append": false, 00:26:56.123 "compare": false, 00:26:56.123 "compare_and_write": false, 00:26:56.123 "abort": true, 00:26:56.123 "seek_hole": false, 00:26:56.123 "seek_data": false, 00:26:56.123 "copy": true, 00:26:56.123 "nvme_iov_md": false 00:26:56.123 }, 00:26:56.123 "memory_domains": [ 00:26:56.123 { 00:26:56.123 "dma_device_id": "system", 00:26:56.123 "dma_device_type": 1 00:26:56.123 }, 00:26:56.123 { 00:26:56.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.123 "dma_device_type": 2 00:26:56.123 } 00:26:56.123 ], 00:26:56.123 "driver_specific": {} 00:26:56.123 }' 00:26:56.123 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.123 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.123 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:56.123 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.123 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.381 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:56.639 "name": "BaseBdev4", 00:26:56.639 "aliases": [ 00:26:56.639 "d3633d8d-3a56-4cbe-b9a9-a21256702fdc" 00:26:56.639 ], 00:26:56.639 "product_name": "Malloc disk", 00:26:56.639 "block_size": 512, 00:26:56.639 "num_blocks": 65536, 00:26:56.639 "uuid": "d3633d8d-3a56-4cbe-b9a9-a21256702fdc", 00:26:56.639 "assigned_rate_limits": { 00:26:56.639 "rw_ios_per_sec": 0, 00:26:56.639 "rw_mbytes_per_sec": 0, 00:26:56.639 "r_mbytes_per_sec": 0, 00:26:56.639 "w_mbytes_per_sec": 0 00:26:56.639 }, 00:26:56.639 "claimed": true, 00:26:56.639 "claim_type": "exclusive_write", 00:26:56.639 "zoned": false, 00:26:56.639 "supported_io_types": { 00:26:56.639 "read": true, 00:26:56.639 "write": true, 00:26:56.639 "unmap": true, 00:26:56.639 "flush": true, 00:26:56.639 "reset": true, 00:26:56.639 "nvme_admin": false, 00:26:56.639 "nvme_io": false, 00:26:56.639 "nvme_io_md": false, 00:26:56.639 "write_zeroes": true, 00:26:56.639 "zcopy": true, 00:26:56.639 "get_zone_info": false, 00:26:56.639 "zone_management": false, 00:26:56.639 "zone_append": false, 00:26:56.639 "compare": false, 00:26:56.639 "compare_and_write": false, 00:26:56.639 "abort": true, 00:26:56.639 "seek_hole": false, 00:26:56.639 "seek_data": false, 00:26:56.639 "copy": true, 00:26:56.639 "nvme_iov_md": false 00:26:56.639 }, 00:26:56.639 "memory_domains": [ 00:26:56.639 { 00:26:56.639 "dma_device_id": "system", 00:26:56.639 "dma_device_type": 1 00:26:56.639 }, 00:26:56.639 { 00:26:56.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.639 "dma_device_type": 2 00:26:56.639 } 00:26:56.639 ], 00:26:56.639 "driver_specific": {} 00:26:56.639 }' 00:26:56.639 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.899 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.899 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:56.899 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.899 08:53:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.899 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:56.899 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:57.157 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:57.415 [2024-07-12 08:53:32.551975] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:57.415 [2024-07-12 08:53:32.552014] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:57.415 [2024-07-12 08:53:32.552155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:57.415 [2024-07-12 08:53:32.552256] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:57.415 [2024-07-12 08:53:32.552270] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 140206 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 140206 ']' 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 140206 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140206 00:26:57.415 killing process with pid 140206 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140206' 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 140206 00:26:57.415 08:53:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 140206 00:26:57.415 [2024-07-12 08:53:32.586742] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:57.988 [2024-07-12 08:53:32.907258] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.931 ************************************ 00:26:58.931 END TEST raid_state_function_test_sb 00:26:58.931 ************************************ 00:26:58.931 08:53:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:58.931 00:26:58.931 real 0m37.185s 00:26:58.931 user 1m9.816s 00:26:58.931 sys 0m4.216s 00:26:58.931 08:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.931 08:53:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.931 08:53:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:58.931 08:53:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:26:58.931 08:53:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:58.931 08:53:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.931 08:53:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:58.931 ************************************ 00:26:58.931 START TEST raid_superblock_test 00:26:58.931 ************************************ 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:58.931 08:53:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=141388 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 141388 /var/tmp/spdk-raid.sock 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 141388 ']' 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:58.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.931 08:53:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.931 [2024-07-12 08:53:34.085212] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:26:58.931 [2024-07-12 08:53:34.085457] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141388 ] 00:26:59.190 [2024-07-12 08:53:34.251405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.448 [2024-07-12 08:53:34.453440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.448 [2024-07-12 08:53:34.635974] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:00.014 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:27:00.273 malloc1 00:27:00.273 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:00.531 [2024-07-12 08:53:35.631951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:00.531 [2024-07-12 08:53:35.632116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.531 [2024-07-12 08:53:35.632159] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:00.531 [2024-07-12 08:53:35.632182] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.531 [2024-07-12 08:53:35.634735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.531 [2024-07-12 08:53:35.634806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:00.531 pt1 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:00.531 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:27:00.790 malloc2 00:27:00.790 08:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:01.357 [2024-07-12 08:53:36.249814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:01.357 [2024-07-12 08:53:36.249991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.357 [2024-07-12 08:53:36.250033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:27:01.357 [2024-07-12 08:53:36.250056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.357 [2024-07-12 08:53:36.252523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.357 [2024-07-12 08:53:36.252577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:01.357 pt2 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.357 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:27:01.357 malloc3 00:27:01.615 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:01.615 [2024-07-12 08:53:36.794922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:01.615 [2024-07-12 08:53:36.795086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.615 [2024-07-12 08:53:36.795127] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:01.615 [2024-07-12 08:53:36.795157] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.615 [2024-07-12 08:53:36.797685] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.615 [2024-07-12 08:53:36.797766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:01.615 pt3 00:27:01.873 
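Aside: the stretch of trace above is the bdev_raid.sh@415-425 loop that prepares one base device per slot, creating a malloc bdev and wrapping it in a passthru bdev (malloc1/pt1 through malloc4/pt4). A minimal sketch of that step, using only the rpc.py invocations visible in the log (socket /var/tmp/spdk-raid.sock, 32 MB malloc bdevs with 512-byte blocks, which lines up with the num_blocks: 65536 reported in the passthru dumps below); the loop form and the rpc shorthand are illustrative, not the script's own code.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 1 2 3 4; do
        # one 32 MB / 512-byte-block malloc bdev per slot (bdev_raid.sh@424)
        $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "malloc$i"
        # wrap it in a passthru bdev with a fixed UUID (bdev_raid.sh@425)
        $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done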
08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:01.873 08:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:27:01.873 malloc4 00:27:02.132 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:02.132 [2024-07-12 08:53:37.299796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:02.132 [2024-07-12 08:53:37.299928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.132 [2024-07-12 08:53:37.299969] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:02.132 [2024-07-12 08:53:37.299998] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.132 [2024-07-12 08:53:37.302665] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.132 [2024-07-12 08:53:37.302742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:02.132 pt4 00:27:02.132 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:27:02.132 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:27:02.132 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:27:02.391 [2024-07-12 08:53:37.519923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:02.391 [2024-07-12 08:53:37.522136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.391 [2024-07-12 08:53:37.522243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:02.391 [2024-07-12 08:53:37.522308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:02.391 [2024-07-12 08:53:37.522617] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:27:02.391 [2024-07-12 08:53:37.522644] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:02.391 [2024-07-12 08:53:37.522833] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:02.391 [2024-07-12 08:53:37.523238] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:27:02.391 [2024-07-12 08:53:37.523278] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:27:02.391 [2024-07-12 08:53:37.523463] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.391 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.650 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.650 "name": "raid_bdev1", 00:27:02.650 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:02.650 "strip_size_kb": 64, 00:27:02.650 "state": "online", 00:27:02.650 "raid_level": "concat", 00:27:02.650 "superblock": true, 00:27:02.650 "num_base_bdevs": 4, 00:27:02.650 "num_base_bdevs_discovered": 4, 00:27:02.650 "num_base_bdevs_operational": 4, 00:27:02.650 "base_bdevs_list": [ 00:27:02.650 { 00:27:02.650 "name": "pt1", 00:27:02.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.650 "is_configured": true, 00:27:02.650 "data_offset": 2048, 00:27:02.650 "data_size": 63488 00:27:02.650 }, 00:27:02.650 { 00:27:02.650 "name": "pt2", 00:27:02.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.650 "is_configured": true, 00:27:02.650 "data_offset": 2048, 00:27:02.650 "data_size": 63488 00:27:02.650 }, 00:27:02.650 { 00:27:02.650 "name": "pt3", 00:27:02.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.650 "is_configured": true, 00:27:02.650 "data_offset": 2048, 00:27:02.650 "data_size": 63488 00:27:02.650 }, 00:27:02.650 { 00:27:02.650 "name": "pt4", 00:27:02.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:02.650 "is_configured": true, 00:27:02.650 "data_offset": 2048, 00:27:02.650 "data_size": 63488 00:27:02.650 } 00:27:02.650 ] 00:27:02.650 }' 00:27:02.650 08:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.650 08:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:03.637 [2024-07-12 08:53:38.700490] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:03.637 "name": "raid_bdev1", 00:27:03.637 "aliases": [ 00:27:03.637 "0754aed9-5075-495d-b08f-c05f853f2b85" 00:27:03.637 ], 00:27:03.637 "product_name": "Raid Volume", 00:27:03.637 "block_size": 512, 00:27:03.637 "num_blocks": 253952, 00:27:03.637 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:03.637 "assigned_rate_limits": { 00:27:03.637 "rw_ios_per_sec": 0, 00:27:03.637 "rw_mbytes_per_sec": 0, 00:27:03.637 "r_mbytes_per_sec": 0, 00:27:03.637 "w_mbytes_per_sec": 0 00:27:03.637 }, 00:27:03.637 "claimed": false, 00:27:03.637 "zoned": false, 00:27:03.637 "supported_io_types": { 00:27:03.637 "read": true, 00:27:03.637 "write": true, 00:27:03.637 "unmap": true, 00:27:03.637 "flush": true, 00:27:03.637 "reset": true, 00:27:03.637 "nvme_admin": false, 00:27:03.637 "nvme_io": false, 00:27:03.637 "nvme_io_md": false, 00:27:03.637 "write_zeroes": true, 00:27:03.637 "zcopy": false, 00:27:03.637 "get_zone_info": false, 00:27:03.637 "zone_management": false, 00:27:03.637 "zone_append": false, 00:27:03.637 "compare": false, 00:27:03.637 "compare_and_write": false, 00:27:03.637 "abort": false, 00:27:03.637 "seek_hole": false, 00:27:03.637 "seek_data": false, 00:27:03.637 "copy": false, 00:27:03.637 "nvme_iov_md": false 00:27:03.637 }, 00:27:03.637 "memory_domains": [ 00:27:03.637 { 00:27:03.637 "dma_device_id": "system", 00:27:03.637 "dma_device_type": 1 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.637 "dma_device_type": 2 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "system", 00:27:03.637 "dma_device_type": 1 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.637 "dma_device_type": 2 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "system", 00:27:03.637 "dma_device_type": 1 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.637 "dma_device_type": 2 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "system", 00:27:03.637 "dma_device_type": 1 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.637 "dma_device_type": 2 00:27:03.637 } 00:27:03.637 ], 00:27:03.637 "driver_specific": { 00:27:03.637 "raid": { 00:27:03.637 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:03.637 "strip_size_kb": 64, 00:27:03.637 "state": "online", 00:27:03.637 "raid_level": "concat", 00:27:03.637 "superblock": true, 00:27:03.637 "num_base_bdevs": 4, 00:27:03.637 "num_base_bdevs_discovered": 4, 00:27:03.637 "num_base_bdevs_operational": 4, 00:27:03.637 "base_bdevs_list": [ 00:27:03.637 { 00:27:03.637 "name": "pt1", 00:27:03.637 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:03.637 "is_configured": true, 00:27:03.637 "data_offset": 2048, 00:27:03.637 "data_size": 63488 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "name": "pt2", 00:27:03.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.637 "is_configured": true, 00:27:03.637 "data_offset": 2048, 00:27:03.637 "data_size": 63488 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "name": "pt3", 00:27:03.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:03.637 "is_configured": true, 00:27:03.637 "data_offset": 2048, 00:27:03.637 "data_size": 63488 00:27:03.637 }, 00:27:03.637 { 00:27:03.637 "name": "pt4", 00:27:03.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:03.637 "is_configured": true, 00:27:03.637 "data_offset": 2048, 00:27:03.637 "data_size": 63488 00:27:03.637 } 00:27:03.637 ] 00:27:03.637 } 00:27:03.637 } 00:27:03.637 }' 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:03.637 pt2 00:27:03.637 pt3 00:27:03.637 pt4' 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.637 08:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:03.896 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.896 "name": "pt1", 00:27:03.896 "aliases": [ 00:27:03.896 "00000000-0000-0000-0000-000000000001" 00:27:03.896 ], 00:27:03.896 "product_name": "passthru", 00:27:03.896 "block_size": 512, 00:27:03.896 "num_blocks": 65536, 00:27:03.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:03.896 "assigned_rate_limits": { 00:27:03.896 "rw_ios_per_sec": 0, 00:27:03.896 "rw_mbytes_per_sec": 0, 00:27:03.896 "r_mbytes_per_sec": 0, 00:27:03.896 "w_mbytes_per_sec": 0 00:27:03.896 }, 00:27:03.896 "claimed": true, 00:27:03.896 "claim_type": "exclusive_write", 00:27:03.896 "zoned": false, 00:27:03.896 "supported_io_types": { 00:27:03.896 "read": true, 00:27:03.896 "write": true, 00:27:03.896 "unmap": true, 00:27:03.896 "flush": true, 00:27:03.896 "reset": true, 00:27:03.896 "nvme_admin": false, 00:27:03.896 "nvme_io": false, 00:27:03.896 "nvme_io_md": false, 00:27:03.896 "write_zeroes": true, 00:27:03.896 "zcopy": true, 00:27:03.896 "get_zone_info": false, 00:27:03.896 "zone_management": false, 00:27:03.896 "zone_append": false, 00:27:03.896 "compare": false, 00:27:03.896 "compare_and_write": false, 00:27:03.896 "abort": true, 00:27:03.896 "seek_hole": false, 00:27:03.896 "seek_data": false, 00:27:03.896 "copy": true, 00:27:03.896 "nvme_iov_md": false 00:27:03.896 }, 00:27:03.896 "memory_domains": [ 00:27:03.896 { 00:27:03.896 "dma_device_id": "system", 00:27:03.896 "dma_device_type": 1 00:27:03.896 }, 00:27:03.896 { 00:27:03.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.896 "dma_device_type": 2 00:27:03.896 } 00:27:03.896 ], 00:27:03.896 "driver_specific": { 00:27:03.896 "passthru": { 00:27:03.896 "name": "pt1", 00:27:03.896 "base_bdev_name": "malloc1" 00:27:03.896 } 00:27:03.896 } 00:27:03.896 }' 00:27:03.896 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.154 08:53:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.154 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:04.413 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.671 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.671 "name": "pt2", 00:27:04.671 "aliases": [ 00:27:04.671 "00000000-0000-0000-0000-000000000002" 00:27:04.671 ], 00:27:04.671 "product_name": "passthru", 00:27:04.671 "block_size": 512, 00:27:04.671 "num_blocks": 65536, 00:27:04.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.671 "assigned_rate_limits": { 00:27:04.671 "rw_ios_per_sec": 0, 00:27:04.671 "rw_mbytes_per_sec": 0, 00:27:04.671 "r_mbytes_per_sec": 0, 00:27:04.671 "w_mbytes_per_sec": 0 00:27:04.671 }, 00:27:04.671 "claimed": true, 00:27:04.671 "claim_type": "exclusive_write", 00:27:04.671 "zoned": false, 00:27:04.671 "supported_io_types": { 00:27:04.671 "read": true, 00:27:04.671 "write": true, 00:27:04.671 "unmap": true, 00:27:04.671 "flush": true, 00:27:04.671 "reset": true, 00:27:04.671 "nvme_admin": false, 00:27:04.671 "nvme_io": false, 00:27:04.671 "nvme_io_md": false, 00:27:04.671 "write_zeroes": true, 00:27:04.671 "zcopy": true, 00:27:04.671 "get_zone_info": false, 00:27:04.671 "zone_management": false, 00:27:04.671 "zone_append": false, 00:27:04.671 "compare": false, 00:27:04.671 "compare_and_write": false, 00:27:04.671 "abort": true, 00:27:04.671 "seek_hole": false, 00:27:04.671 "seek_data": false, 00:27:04.671 "copy": true, 00:27:04.671 "nvme_iov_md": false 00:27:04.671 }, 00:27:04.671 "memory_domains": [ 00:27:04.671 { 00:27:04.671 "dma_device_id": "system", 00:27:04.671 "dma_device_type": 1 00:27:04.671 }, 00:27:04.671 { 00:27:04.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.671 "dma_device_type": 2 00:27:04.671 } 00:27:04.671 ], 00:27:04.671 "driver_specific": { 00:27:04.671 "passthru": { 00:27:04.671 "name": "pt2", 00:27:04.671 "base_bdev_name": "malloc2" 00:27:04.671 } 00:27:04.671 } 00:27:04.671 }' 00:27:04.671 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.671 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.671 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:27:04.671 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.930 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.930 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:04.930 08:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.930 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.930 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.930 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.188 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.188 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:05.188 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:05.188 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:05.188 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:05.448 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:05.448 "name": "pt3", 00:27:05.448 "aliases": [ 00:27:05.448 "00000000-0000-0000-0000-000000000003" 00:27:05.448 ], 00:27:05.448 "product_name": "passthru", 00:27:05.448 "block_size": 512, 00:27:05.448 "num_blocks": 65536, 00:27:05.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:05.448 "assigned_rate_limits": { 00:27:05.448 "rw_ios_per_sec": 0, 00:27:05.448 "rw_mbytes_per_sec": 0, 00:27:05.448 "r_mbytes_per_sec": 0, 00:27:05.448 "w_mbytes_per_sec": 0 00:27:05.448 }, 00:27:05.448 "claimed": true, 00:27:05.448 "claim_type": "exclusive_write", 00:27:05.448 "zoned": false, 00:27:05.448 "supported_io_types": { 00:27:05.448 "read": true, 00:27:05.448 "write": true, 00:27:05.448 "unmap": true, 00:27:05.448 "flush": true, 00:27:05.448 "reset": true, 00:27:05.448 "nvme_admin": false, 00:27:05.448 "nvme_io": false, 00:27:05.448 "nvme_io_md": false, 00:27:05.448 "write_zeroes": true, 00:27:05.448 "zcopy": true, 00:27:05.448 "get_zone_info": false, 00:27:05.448 "zone_management": false, 00:27:05.448 "zone_append": false, 00:27:05.448 "compare": false, 00:27:05.448 "compare_and_write": false, 00:27:05.448 "abort": true, 00:27:05.448 "seek_hole": false, 00:27:05.448 "seek_data": false, 00:27:05.448 "copy": true, 00:27:05.448 "nvme_iov_md": false 00:27:05.448 }, 00:27:05.448 "memory_domains": [ 00:27:05.448 { 00:27:05.448 "dma_device_id": "system", 00:27:05.448 "dma_device_type": 1 00:27:05.448 }, 00:27:05.448 { 00:27:05.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.448 "dma_device_type": 2 00:27:05.448 } 00:27:05.448 ], 00:27:05.448 "driver_specific": { 00:27:05.448 "passthru": { 00:27:05.448 "name": "pt3", 00:27:05.448 "base_bdev_name": "malloc3" 00:27:05.448 } 00:27:05.448 } 00:27:05.448 }' 00:27:05.448 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.448 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:05.448 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:05.448 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.448 08:53:40 
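Aside: the jq probes running through this part of the trace (bdev_raid.sh@203-208) repeat once per configured base bdev, pt1 through pt4. A condensed sketch of that loop; the expected values (512-byte block size, null metadata and DIF fields) are the ones the log's [[ ... ]] comparisons use, and the set -e at the top stands in for the suite's abort-on-failure behaviour.

    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    base_bdev_names=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 |
        jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$base_bdev_info") == 512  ]]   # same block size as the volume
        [[ $(jq .md_size       <<< "$base_bdev_info") == null ]]   # no separate metadata
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type      <<< "$base_bdev_info") == null ]]   # no DIF configured
    done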
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:05.707 08:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:06.275 "name": "pt4", 00:27:06.275 "aliases": [ 00:27:06.275 "00000000-0000-0000-0000-000000000004" 00:27:06.275 ], 00:27:06.275 "product_name": "passthru", 00:27:06.275 "block_size": 512, 00:27:06.275 "num_blocks": 65536, 00:27:06.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:06.275 "assigned_rate_limits": { 00:27:06.275 "rw_ios_per_sec": 0, 00:27:06.275 "rw_mbytes_per_sec": 0, 00:27:06.275 "r_mbytes_per_sec": 0, 00:27:06.275 "w_mbytes_per_sec": 0 00:27:06.275 }, 00:27:06.275 "claimed": true, 00:27:06.275 "claim_type": "exclusive_write", 00:27:06.275 "zoned": false, 00:27:06.275 "supported_io_types": { 00:27:06.275 "read": true, 00:27:06.275 "write": true, 00:27:06.275 "unmap": true, 00:27:06.275 "flush": true, 00:27:06.275 "reset": true, 00:27:06.275 "nvme_admin": false, 00:27:06.275 "nvme_io": false, 00:27:06.275 "nvme_io_md": false, 00:27:06.275 "write_zeroes": true, 00:27:06.275 "zcopy": true, 00:27:06.275 "get_zone_info": false, 00:27:06.275 "zone_management": false, 00:27:06.275 "zone_append": false, 00:27:06.275 "compare": false, 00:27:06.275 "compare_and_write": false, 00:27:06.275 "abort": true, 00:27:06.275 "seek_hole": false, 00:27:06.275 "seek_data": false, 00:27:06.275 "copy": true, 00:27:06.275 "nvme_iov_md": false 00:27:06.275 }, 00:27:06.275 "memory_domains": [ 00:27:06.275 { 00:27:06.275 "dma_device_id": "system", 00:27:06.275 "dma_device_type": 1 00:27:06.275 }, 00:27:06.275 { 00:27:06.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.275 "dma_device_type": 2 00:27:06.275 } 00:27:06.275 ], 00:27:06.275 "driver_specific": { 00:27:06.275 "passthru": { 00:27:06.275 "name": "pt4", 00:27:06.275 "base_bdev_name": "malloc4" 00:27:06.275 } 00:27:06.275 } 00:27:06.275 }' 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.275 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:06.534 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:27:06.792 [2024-07-12 08:53:41.874042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:06.792 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0754aed9-5075-495d-b08f-c05f853f2b85 00:27:06.792 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0754aed9-5075-495d-b08f-c05f853f2b85 ']' 00:27:06.792 08:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:07.050 [2024-07-12 08:53:42.161743] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:07.050 [2024-07-12 08:53:42.161793] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:07.051 [2024-07-12 08:53:42.161905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:07.051 [2024-07-12 08:53:42.161991] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:07.051 [2024-07-12 08:53:42.162003] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:27:07.051 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.051 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:27:07.309 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:27:07.309 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:27:07.309 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:07.309 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:07.568 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:07.568 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:07.827 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:07.827 08:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:08.085 08:53:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:27:08.085 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:08.343 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:08.343 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:08.601 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:27:08.601 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:08.602 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:27:08.860 [2024-07-12 08:53:43.934078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:08.860 [2024-07-12 08:53:43.936208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:08.860 [2024-07-12 08:53:43.936333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:08.860 [2024-07-12 08:53:43.936384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:08.860 [2024-07-12 08:53:43.936464] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:08.860 [2024-07-12 08:53:43.937106] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:08.860 [2024-07-12 08:53:43.937343] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:27:08.860 [2024-07-12 08:53:43.937513] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:08.860 [2024-07-12 08:53:43.937664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:08.860 [2024-07-12 08:53:43.937693] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:27:08.860 request: 00:27:08.860 { 00:27:08.860 "name": "raid_bdev1", 00:27:08.860 "raid_level": "concat", 00:27:08.860 "base_bdevs": [ 00:27:08.860 "malloc1", 00:27:08.860 "malloc2", 00:27:08.860 "malloc3", 00:27:08.860 "malloc4" 00:27:08.860 ], 00:27:08.860 "strip_size_kb": 64, 00:27:08.860 "superblock": false, 00:27:08.860 "method": "bdev_raid_create", 00:27:08.860 "req_id": 1 00:27:08.860 } 00:27:08.860 Got JSON-RPC error response 00:27:08.860 response: 00:27:08.860 { 00:27:08.860 "code": -17, 00:27:08.861 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:08.861 } 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.861 08:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:27:09.120 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:27:09.120 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:27:09.120 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:09.379 [2024-07-12 08:53:44.392640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:09.379 [2024-07-12 08:53:44.392799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.379 [2024-07-12 08:53:44.392840] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:09.379 [2024-07-12 08:53:44.392894] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.379 [2024-07-12 08:53:44.395473] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.379 [2024-07-12 08:53:44.395544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:09.379 [2024-07-12 08:53:44.395690] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:09.379 [2024-07-12 08:53:44.395769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:09.379 pt1 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:09.379 08:53:44 
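Aside: the request/response pair above is the negative half of the test. The superblocks written by the earlier creation with -s are still sitting on the malloc bdevs, so building a new raid_bdev1 directly on malloc1..malloc4 is expected to be rejected with code -17, "File exists". A sketch of that check, assuming only that the suite's NOT helper inverts the exit status of the command it wraps (as the autotest_common.sh trace above suggests):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded over existing superblocks" >&2
        exit 1
    fi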
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.379 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.638 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.638 "name": "raid_bdev1", 00:27:09.638 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:09.638 "strip_size_kb": 64, 00:27:09.638 "state": "configuring", 00:27:09.638 "raid_level": "concat", 00:27:09.638 "superblock": true, 00:27:09.638 "num_base_bdevs": 4, 00:27:09.638 "num_base_bdevs_discovered": 1, 00:27:09.638 "num_base_bdevs_operational": 4, 00:27:09.638 "base_bdevs_list": [ 00:27:09.638 { 00:27:09.638 "name": "pt1", 00:27:09.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:09.638 "is_configured": true, 00:27:09.638 "data_offset": 2048, 00:27:09.638 "data_size": 63488 00:27:09.638 }, 00:27:09.638 { 00:27:09.638 "name": null, 00:27:09.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:09.638 "is_configured": false, 00:27:09.638 "data_offset": 2048, 00:27:09.638 "data_size": 63488 00:27:09.638 }, 00:27:09.638 { 00:27:09.638 "name": null, 00:27:09.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:09.638 "is_configured": false, 00:27:09.638 "data_offset": 2048, 00:27:09.638 "data_size": 63488 00:27:09.638 }, 00:27:09.638 { 00:27:09.638 "name": null, 00:27:09.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:09.638 "is_configured": false, 00:27:09.638 "data_offset": 2048, 00:27:09.638 "data_size": 63488 00:27:09.638 } 00:27:09.638 ] 00:27:09.638 }' 00:27:09.638 08:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.638 08:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.205 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:27:10.205 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:10.464 [2024-07-12 08:53:45.604992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:10.464 [2024-07-12 08:53:45.605117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.464 [2024-07-12 08:53:45.605172] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:10.464 [2024-07-12 08:53:45.605221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.464 [2024-07-12 08:53:45.605792] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:27:10.464 [2024-07-12 08:53:45.605826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:10.464 [2024-07-12 08:53:45.605945] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:10.464 [2024-07-12 08:53:45.605985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:10.464 pt2 00:27:10.464 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:10.722 [2024-07-12 08:53:45.909262] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.981 08:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.240 08:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.240 "name": "raid_bdev1", 00:27:11.240 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:11.240 "strip_size_kb": 64, 00:27:11.240 "state": "configuring", 00:27:11.240 "raid_level": "concat", 00:27:11.240 "superblock": true, 00:27:11.240 "num_base_bdevs": 4, 00:27:11.240 "num_base_bdevs_discovered": 1, 00:27:11.240 "num_base_bdevs_operational": 4, 00:27:11.240 "base_bdevs_list": [ 00:27:11.240 { 00:27:11.240 "name": "pt1", 00:27:11.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:11.240 "is_configured": true, 00:27:11.240 "data_offset": 2048, 00:27:11.240 "data_size": 63488 00:27:11.240 }, 00:27:11.240 { 00:27:11.240 "name": null, 00:27:11.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.240 "is_configured": false, 00:27:11.240 "data_offset": 2048, 00:27:11.240 "data_size": 63488 00:27:11.240 }, 00:27:11.240 { 00:27:11.240 "name": null, 00:27:11.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:11.240 "is_configured": false, 00:27:11.240 "data_offset": 2048, 00:27:11.240 "data_size": 63488 00:27:11.240 }, 00:27:11.240 { 00:27:11.240 "name": null, 00:27:11.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:11.240 "is_configured": false, 00:27:11.240 "data_offset": 2048, 00:27:11.240 "data_size": 63488 00:27:11.240 } 00:27:11.240 ] 00:27:11.240 }' 00:27:11.240 08:53:46 
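Aside: from here the array is re-assembled one leg at a time. Each time a ptN passthru bdev is re-created, examine finds the raid superblock on it ("raid superblock found on bdev ptN") and re-claims it; until all four legs are back, raid_bdev1 sits in the "configuring" state with a partial discovery count, exactly as in the JSON dump above. A small probe mirroring the verify_raid_bdev_state check:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")
               | "\(.state): \(.num_base_bdevs_discovered)/\(.num_base_bdevs) base bdevs discovered"'
    # with only pt1 restored this prints: configuring: 1/4 base bdevs discovered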
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.240 08:53:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.808 08:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:27:11.808 08:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:11.808 08:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:12.067 [2024-07-12 08:53:47.197541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:12.067 [2024-07-12 08:53:47.197643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.067 [2024-07-12 08:53:47.197689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:12.067 [2024-07-12 08:53:47.197740] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.067 [2024-07-12 08:53:47.198288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.067 [2024-07-12 08:53:47.198339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:12.067 [2024-07-12 08:53:47.198484] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:12.067 [2024-07-12 08:53:47.198518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:12.067 pt2 00:27:12.067 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:12.067 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:12.067 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:12.325 [2024-07-12 08:53:47.473609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:12.325 [2024-07-12 08:53:47.473731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.325 [2024-07-12 08:53:47.473768] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:12.325 [2024-07-12 08:53:47.473817] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.325 [2024-07-12 08:53:47.474365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.325 [2024-07-12 08:53:47.474411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:12.325 [2024-07-12 08:53:47.474531] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:12.325 [2024-07-12 08:53:47.474561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:12.325 pt3 00:27:12.325 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:12.325 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:12.325 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:12.584 [2024-07-12 08:53:47.749654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:27:12.584 [2024-07-12 08:53:47.749802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.584 [2024-07-12 08:53:47.749841] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:12.584 [2024-07-12 08:53:47.749910] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.584 [2024-07-12 08:53:47.750476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.584 [2024-07-12 08:53:47.750526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:12.584 [2024-07-12 08:53:47.750646] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:12.584 [2024-07-12 08:53:47.750689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:12.584 [2024-07-12 08:53:47.750856] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:27:12.584 [2024-07-12 08:53:47.750880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:12.584 [2024-07-12 08:53:47.750992] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:12.584 [2024-07-12 08:53:47.751355] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:27:12.584 [2024-07-12 08:53:47.751380] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:27:12.584 [2024-07-12 08:53:47.751523] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.584 pt4 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.584 08:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.843 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.843 "name": "raid_bdev1", 00:27:12.843 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:12.843 "strip_size_kb": 64, 00:27:12.843 "state": "online", 00:27:12.843 
"raid_level": "concat", 00:27:12.843 "superblock": true, 00:27:12.843 "num_base_bdevs": 4, 00:27:12.843 "num_base_bdevs_discovered": 4, 00:27:12.843 "num_base_bdevs_operational": 4, 00:27:12.843 "base_bdevs_list": [ 00:27:12.843 { 00:27:12.843 "name": "pt1", 00:27:12.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:12.843 "is_configured": true, 00:27:12.843 "data_offset": 2048, 00:27:12.843 "data_size": 63488 00:27:12.843 }, 00:27:12.843 { 00:27:12.843 "name": "pt2", 00:27:12.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.843 "is_configured": true, 00:27:12.843 "data_offset": 2048, 00:27:12.843 "data_size": 63488 00:27:12.843 }, 00:27:12.843 { 00:27:12.843 "name": "pt3", 00:27:12.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:12.843 "is_configured": true, 00:27:12.843 "data_offset": 2048, 00:27:12.843 "data_size": 63488 00:27:12.843 }, 00:27:12.843 { 00:27:12.843 "name": "pt4", 00:27:12.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:12.843 "is_configured": true, 00:27:12.843 "data_offset": 2048, 00:27:12.843 "data_size": 63488 00:27:12.843 } 00:27:12.843 ] 00:27:12.843 }' 00:27:12.843 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.843 08:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:13.780 [2024-07-12 08:53:48.954276] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.780 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:13.780 "name": "raid_bdev1", 00:27:13.780 "aliases": [ 00:27:13.780 "0754aed9-5075-495d-b08f-c05f853f2b85" 00:27:13.780 ], 00:27:13.780 "product_name": "Raid Volume", 00:27:13.780 "block_size": 512, 00:27:13.780 "num_blocks": 253952, 00:27:13.780 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:13.780 "assigned_rate_limits": { 00:27:13.780 "rw_ios_per_sec": 0, 00:27:13.780 "rw_mbytes_per_sec": 0, 00:27:13.780 "r_mbytes_per_sec": 0, 00:27:13.780 "w_mbytes_per_sec": 0 00:27:13.780 }, 00:27:13.780 "claimed": false, 00:27:13.780 "zoned": false, 00:27:13.780 "supported_io_types": { 00:27:13.780 "read": true, 00:27:13.780 "write": true, 00:27:13.780 "unmap": true, 00:27:13.780 "flush": true, 00:27:13.780 "reset": true, 00:27:13.780 "nvme_admin": false, 00:27:13.780 "nvme_io": false, 00:27:13.780 "nvme_io_md": false, 00:27:13.780 "write_zeroes": true, 00:27:13.780 "zcopy": false, 00:27:13.780 "get_zone_info": false, 00:27:13.780 "zone_management": false, 00:27:13.780 "zone_append": false, 00:27:13.780 "compare": false, 00:27:13.780 "compare_and_write": false, 
00:27:13.780 "abort": false, 00:27:13.780 "seek_hole": false, 00:27:13.780 "seek_data": false, 00:27:13.780 "copy": false, 00:27:13.780 "nvme_iov_md": false 00:27:13.780 }, 00:27:13.780 "memory_domains": [ 00:27:13.780 { 00:27:13.780 "dma_device_id": "system", 00:27:13.780 "dma_device_type": 1 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.781 "dma_device_type": 2 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "system", 00:27:13.781 "dma_device_type": 1 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.781 "dma_device_type": 2 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "system", 00:27:13.781 "dma_device_type": 1 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.781 "dma_device_type": 2 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "system", 00:27:13.781 "dma_device_type": 1 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:13.781 "dma_device_type": 2 00:27:13.781 } 00:27:13.781 ], 00:27:13.781 "driver_specific": { 00:27:13.781 "raid": { 00:27:13.781 "uuid": "0754aed9-5075-495d-b08f-c05f853f2b85", 00:27:13.781 "strip_size_kb": 64, 00:27:13.781 "state": "online", 00:27:13.781 "raid_level": "concat", 00:27:13.781 "superblock": true, 00:27:13.781 "num_base_bdevs": 4, 00:27:13.781 "num_base_bdevs_discovered": 4, 00:27:13.781 "num_base_bdevs_operational": 4, 00:27:13.781 "base_bdevs_list": [ 00:27:13.781 { 00:27:13.781 "name": "pt1", 00:27:13.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.781 "is_configured": true, 00:27:13.781 "data_offset": 2048, 00:27:13.781 "data_size": 63488 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "name": "pt2", 00:27:13.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.781 "is_configured": true, 00:27:13.781 "data_offset": 2048, 00:27:13.781 "data_size": 63488 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "name": "pt3", 00:27:13.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:13.781 "is_configured": true, 00:27:13.781 "data_offset": 2048, 00:27:13.781 "data_size": 63488 00:27:13.781 }, 00:27:13.781 { 00:27:13.781 "name": "pt4", 00:27:13.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:13.781 "is_configured": true, 00:27:13.781 "data_offset": 2048, 00:27:13.781 "data_size": 63488 00:27:13.781 } 00:27:13.781 ] 00:27:13.781 } 00:27:13.781 } 00:27:13.781 }' 00:27:13.781 08:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:14.040 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:14.040 pt2 00:27:14.040 pt3 00:27:14.040 pt4' 00:27:14.040 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:14.040 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:14.040 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:14.299 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:14.299 "name": "pt1", 00:27:14.299 "aliases": [ 00:27:14.299 "00000000-0000-0000-0000-000000000001" 00:27:14.299 ], 00:27:14.299 "product_name": "passthru", 00:27:14.299 "block_size": 512, 00:27:14.299 "num_blocks": 65536, 00:27:14.299 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:14.299 "assigned_rate_limits": { 00:27:14.299 "rw_ios_per_sec": 0, 00:27:14.299 "rw_mbytes_per_sec": 0, 00:27:14.299 "r_mbytes_per_sec": 0, 00:27:14.299 "w_mbytes_per_sec": 0 00:27:14.299 }, 00:27:14.299 "claimed": true, 00:27:14.299 "claim_type": "exclusive_write", 00:27:14.299 "zoned": false, 00:27:14.299 "supported_io_types": { 00:27:14.299 "read": true, 00:27:14.300 "write": true, 00:27:14.300 "unmap": true, 00:27:14.300 "flush": true, 00:27:14.300 "reset": true, 00:27:14.300 "nvme_admin": false, 00:27:14.300 "nvme_io": false, 00:27:14.300 "nvme_io_md": false, 00:27:14.300 "write_zeroes": true, 00:27:14.300 "zcopy": true, 00:27:14.300 "get_zone_info": false, 00:27:14.300 "zone_management": false, 00:27:14.300 "zone_append": false, 00:27:14.300 "compare": false, 00:27:14.300 "compare_and_write": false, 00:27:14.300 "abort": true, 00:27:14.300 "seek_hole": false, 00:27:14.300 "seek_data": false, 00:27:14.300 "copy": true, 00:27:14.300 "nvme_iov_md": false 00:27:14.300 }, 00:27:14.300 "memory_domains": [ 00:27:14.300 { 00:27:14.300 "dma_device_id": "system", 00:27:14.300 "dma_device_type": 1 00:27:14.300 }, 00:27:14.300 { 00:27:14.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.300 "dma_device_type": 2 00:27:14.300 } 00:27:14.300 ], 00:27:14.300 "driver_specific": { 00:27:14.300 "passthru": { 00:27:14.300 "name": "pt1", 00:27:14.300 "base_bdev_name": "malloc1" 00:27:14.300 } 00:27:14.300 } 00:27:14.300 }' 00:27:14.300 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:14.300 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:14.300 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:14.300 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:14.300 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:14.558 08:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:15.124 "name": "pt2", 00:27:15.124 "aliases": [ 00:27:15.124 "00000000-0000-0000-0000-000000000002" 00:27:15.124 ], 00:27:15.124 "product_name": "passthru", 00:27:15.124 "block_size": 512, 00:27:15.124 "num_blocks": 65536, 00:27:15.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:15.124 "assigned_rate_limits": { 00:27:15.124 "rw_ios_per_sec": 0, 00:27:15.124 "rw_mbytes_per_sec": 0, 
00:27:15.124 "r_mbytes_per_sec": 0, 00:27:15.124 "w_mbytes_per_sec": 0 00:27:15.124 }, 00:27:15.124 "claimed": true, 00:27:15.124 "claim_type": "exclusive_write", 00:27:15.124 "zoned": false, 00:27:15.124 "supported_io_types": { 00:27:15.124 "read": true, 00:27:15.124 "write": true, 00:27:15.124 "unmap": true, 00:27:15.124 "flush": true, 00:27:15.124 "reset": true, 00:27:15.124 "nvme_admin": false, 00:27:15.124 "nvme_io": false, 00:27:15.124 "nvme_io_md": false, 00:27:15.124 "write_zeroes": true, 00:27:15.124 "zcopy": true, 00:27:15.124 "get_zone_info": false, 00:27:15.124 "zone_management": false, 00:27:15.124 "zone_append": false, 00:27:15.124 "compare": false, 00:27:15.124 "compare_and_write": false, 00:27:15.124 "abort": true, 00:27:15.124 "seek_hole": false, 00:27:15.124 "seek_data": false, 00:27:15.124 "copy": true, 00:27:15.124 "nvme_iov_md": false 00:27:15.124 }, 00:27:15.124 "memory_domains": [ 00:27:15.124 { 00:27:15.124 "dma_device_id": "system", 00:27:15.124 "dma_device_type": 1 00:27:15.124 }, 00:27:15.124 { 00:27:15.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.124 "dma_device_type": 2 00:27:15.124 } 00:27:15.124 ], 00:27:15.124 "driver_specific": { 00:27:15.124 "passthru": { 00:27:15.124 "name": "pt2", 00:27:15.124 "base_bdev_name": "malloc2" 00:27:15.124 } 00:27:15.124 } 00:27:15.124 }' 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.124 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:15.383 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:15.641 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:15.641 "name": "pt3", 00:27:15.641 "aliases": [ 00:27:15.641 "00000000-0000-0000-0000-000000000003" 00:27:15.641 ], 00:27:15.641 "product_name": "passthru", 00:27:15.641 "block_size": 512, 00:27:15.641 "num_blocks": 65536, 00:27:15.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:15.641 "assigned_rate_limits": { 00:27:15.641 "rw_ios_per_sec": 0, 00:27:15.641 "rw_mbytes_per_sec": 0, 00:27:15.641 "r_mbytes_per_sec": 0, 00:27:15.641 "w_mbytes_per_sec": 0 00:27:15.641 }, 00:27:15.641 "claimed": true, 00:27:15.641 "claim_type": 
"exclusive_write", 00:27:15.641 "zoned": false, 00:27:15.641 "supported_io_types": { 00:27:15.641 "read": true, 00:27:15.641 "write": true, 00:27:15.641 "unmap": true, 00:27:15.641 "flush": true, 00:27:15.641 "reset": true, 00:27:15.641 "nvme_admin": false, 00:27:15.641 "nvme_io": false, 00:27:15.641 "nvme_io_md": false, 00:27:15.641 "write_zeroes": true, 00:27:15.641 "zcopy": true, 00:27:15.641 "get_zone_info": false, 00:27:15.641 "zone_management": false, 00:27:15.641 "zone_append": false, 00:27:15.641 "compare": false, 00:27:15.641 "compare_and_write": false, 00:27:15.641 "abort": true, 00:27:15.641 "seek_hole": false, 00:27:15.641 "seek_data": false, 00:27:15.641 "copy": true, 00:27:15.641 "nvme_iov_md": false 00:27:15.641 }, 00:27:15.641 "memory_domains": [ 00:27:15.641 { 00:27:15.641 "dma_device_id": "system", 00:27:15.641 "dma_device_type": 1 00:27:15.641 }, 00:27:15.641 { 00:27:15.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.641 "dma_device_type": 2 00:27:15.641 } 00:27:15.641 ], 00:27:15.641 "driver_specific": { 00:27:15.641 "passthru": { 00:27:15.641 "name": "pt3", 00:27:15.641 "base_bdev_name": "malloc3" 00:27:15.641 } 00:27:15.641 } 00:27:15.641 }' 00:27:15.641 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.641 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.641 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:15.641 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.911 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.911 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:15.911 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.911 08:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.911 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:15.911 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.173 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.173 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:16.173 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:16.173 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:16.173 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:16.431 "name": "pt4", 00:27:16.431 "aliases": [ 00:27:16.431 "00000000-0000-0000-0000-000000000004" 00:27:16.431 ], 00:27:16.431 "product_name": "passthru", 00:27:16.431 "block_size": 512, 00:27:16.431 "num_blocks": 65536, 00:27:16.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:16.431 "assigned_rate_limits": { 00:27:16.431 "rw_ios_per_sec": 0, 00:27:16.431 "rw_mbytes_per_sec": 0, 00:27:16.431 "r_mbytes_per_sec": 0, 00:27:16.431 "w_mbytes_per_sec": 0 00:27:16.431 }, 00:27:16.431 "claimed": true, 00:27:16.431 "claim_type": "exclusive_write", 00:27:16.431 "zoned": false, 00:27:16.431 "supported_io_types": { 00:27:16.431 "read": true, 00:27:16.431 "write": true, 00:27:16.431 
"unmap": true, 00:27:16.431 "flush": true, 00:27:16.431 "reset": true, 00:27:16.431 "nvme_admin": false, 00:27:16.431 "nvme_io": false, 00:27:16.431 "nvme_io_md": false, 00:27:16.431 "write_zeroes": true, 00:27:16.431 "zcopy": true, 00:27:16.431 "get_zone_info": false, 00:27:16.431 "zone_management": false, 00:27:16.431 "zone_append": false, 00:27:16.431 "compare": false, 00:27:16.431 "compare_and_write": false, 00:27:16.431 "abort": true, 00:27:16.431 "seek_hole": false, 00:27:16.431 "seek_data": false, 00:27:16.431 "copy": true, 00:27:16.431 "nvme_iov_md": false 00:27:16.431 }, 00:27:16.431 "memory_domains": [ 00:27:16.431 { 00:27:16.431 "dma_device_id": "system", 00:27:16.431 "dma_device_type": 1 00:27:16.431 }, 00:27:16.431 { 00:27:16.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:16.431 "dma_device_type": 2 00:27:16.431 } 00:27:16.431 ], 00:27:16.431 "driver_specific": { 00:27:16.431 "passthru": { 00:27:16.431 "name": "pt4", 00:27:16.431 "base_bdev_name": "malloc4" 00:27:16.431 } 00:27:16.431 } 00:27:16.431 }' 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.431 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:16.690 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:16.948 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:16.948 08:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:27:16.948 [2024-07-12 08:53:52.139008] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0754aed9-5075-495d-b08f-c05f853f2b85 '!=' 0754aed9-5075-495d-b08f-c05f853f2b85 ']' 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 141388 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 141388 ']' 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 141388 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:17.207 08:53:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141388 00:27:17.207 killing process with pid 141388 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141388' 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 141388 00:27:17.207 08:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 141388 00:27:17.207 [2024-07-12 08:53:52.175941] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:17.207 [2024-07-12 08:53:52.176029] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.207 [2024-07-12 08:53:52.176135] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.207 [2024-07-12 08:53:52.176146] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:27:17.465 [2024-07-12 08:53:52.487336] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:18.399 ************************************ 00:27:18.399 END TEST raid_superblock_test 00:27:18.399 ************************************ 00:27:18.399 08:53:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:18.399 00:27:18.399 real 0m19.522s 00:27:18.399 user 0m35.718s 00:27:18.399 sys 0m2.137s 00:27:18.399 08:53:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.399 08:53:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.399 08:53:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:18.399 08:53:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:27:18.399 08:53:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:18.399 08:53:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.399 08:53:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:18.657 ************************************ 00:27:18.657 START TEST raid_read_error_test 00:27:18.657 ************************************ 00:27:18.657 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:27:18.657 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:27:18.657 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:18.657 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.UEoTHdjIfz 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141996 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141996 /var/tmp/spdk-raid.sock 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 141996 ']' 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:18.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.658 08:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.658 [2024-07-12 08:53:53.683398] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:27:18.658 [2024-07-12 08:53:53.683630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141996 ] 00:27:18.916 [2024-07-12 08:53:53.858283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.916 [2024-07-12 08:53:54.101351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.175 [2024-07-12 08:53:54.294315] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:19.742 08:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.743 08:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:19.743 08:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:19.743 08:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:19.743 BaseBdev1_malloc 00:27:19.743 08:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:20.001 true 00:27:20.001 08:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:20.259 [2024-07-12 08:53:55.403757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:20.259 [2024-07-12 08:53:55.403894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.259 [2024-07-12 08:53:55.403941] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:20.259 [2024-07-12 08:53:55.403980] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.259 [2024-07-12 08:53:55.406595] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.259 [2024-07-12 08:53:55.406645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:20.259 BaseBdev1 00:27:20.259 08:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:20.259 08:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.517 BaseBdev2_malloc 00:27:20.774 08:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:20.774 true 00:27:20.774 08:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:21.033 [2024-07-12 08:53:56.184636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:27:21.033 [2024-07-12 08:53:56.184804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.033 [2024-07-12 08:53:56.184854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:21.033 [2024-07-12 08:53:56.184878] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.033 [2024-07-12 08:53:56.187390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.033 [2024-07-12 08:53:56.187463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:21.033 BaseBdev2 00:27:21.033 08:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:21.033 08:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:21.600 BaseBdev3_malloc 00:27:21.600 08:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:21.600 true 00:27:21.858 08:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:21.858 [2024-07-12 08:53:57.022575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:21.858 [2024-07-12 08:53:57.022758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.858 [2024-07-12 08:53:57.022839] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:21.858 [2024-07-12 08:53:57.022882] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.858 [2024-07-12 08:53:57.025910] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.858 [2024-07-12 08:53:57.025973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:21.858 BaseBdev3 00:27:21.858 08:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:21.858 08:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:22.424 BaseBdev4_malloc 00:27:22.424 08:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:22.424 true 00:27:22.424 08:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:22.681 [2024-07-12 08:53:57.782197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:22.681 [2024-07-12 08:53:57.782363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.681 [2024-07-12 08:53:57.782425] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:22.681 [2024-07-12 08:53:57.782456] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.681 [2024-07-12 08:53:57.785416] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:27:22.681 [2024-07-12 08:53:57.785501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:22.681 BaseBdev4 00:27:22.681 08:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:22.939 [2024-07-12 08:53:58.010505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.939 [2024-07-12 08:53:58.013129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.939 [2024-07-12 08:53:58.013248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:22.939 [2024-07-12 08:53:58.013362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:22.939 [2024-07-12 08:53:58.013690] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:22.939 [2024-07-12 08:53:58.013706] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:22.939 [2024-07-12 08:53:58.013889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:22.939 [2024-07-12 08:53:58.014367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:22.939 [2024-07-12 08:53:58.014409] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:22.939 [2024-07-12 08:53:58.014682] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.939 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.198 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:23.198 "name": "raid_bdev1", 00:27:23.198 "uuid": "e66a66b8-6065-4342-a9f6-26b750a269c1", 00:27:23.198 "strip_size_kb": 64, 00:27:23.198 "state": "online", 00:27:23.198 "raid_level": "concat", 00:27:23.198 "superblock": true, 00:27:23.198 "num_base_bdevs": 4, 00:27:23.198 "num_base_bdevs_discovered": 4, 00:27:23.198 
"num_base_bdevs_operational": 4, 00:27:23.198 "base_bdevs_list": [ 00:27:23.198 { 00:27:23.198 "name": "BaseBdev1", 00:27:23.198 "uuid": "18219f65-1fad-5213-96c6-3121b45eed4c", 00:27:23.198 "is_configured": true, 00:27:23.198 "data_offset": 2048, 00:27:23.198 "data_size": 63488 00:27:23.198 }, 00:27:23.198 { 00:27:23.198 "name": "BaseBdev2", 00:27:23.198 "uuid": "9cf646aa-3a70-507e-b17c-1681d608d68f", 00:27:23.198 "is_configured": true, 00:27:23.198 "data_offset": 2048, 00:27:23.198 "data_size": 63488 00:27:23.198 }, 00:27:23.198 { 00:27:23.198 "name": "BaseBdev3", 00:27:23.198 "uuid": "8f1465e3-5144-5915-a49a-b7fc7d560852", 00:27:23.198 "is_configured": true, 00:27:23.198 "data_offset": 2048, 00:27:23.198 "data_size": 63488 00:27:23.198 }, 00:27:23.198 { 00:27:23.198 "name": "BaseBdev4", 00:27:23.198 "uuid": "8d2ed107-4e2f-568f-b22b-d41c8277c23e", 00:27:23.198 "is_configured": true, 00:27:23.198 "data_offset": 2048, 00:27:23.198 "data_size": 63488 00:27:23.198 } 00:27:23.198 ] 00:27:23.198 }' 00:27:23.198 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:23.198 08:53:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:24.132 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:24.132 08:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:24.132 [2024-07-12 08:53:59.068217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:25.066 08:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.066 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:25.325 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.325 "name": "raid_bdev1", 00:27:25.325 "uuid": "e66a66b8-6065-4342-a9f6-26b750a269c1", 00:27:25.325 "strip_size_kb": 64, 00:27:25.325 "state": "online", 00:27:25.325 "raid_level": "concat", 00:27:25.325 "superblock": true, 00:27:25.325 "num_base_bdevs": 4, 00:27:25.325 "num_base_bdevs_discovered": 4, 00:27:25.325 "num_base_bdevs_operational": 4, 00:27:25.325 "base_bdevs_list": [ 00:27:25.325 { 00:27:25.325 "name": "BaseBdev1", 00:27:25.325 "uuid": "18219f65-1fad-5213-96c6-3121b45eed4c", 00:27:25.325 "is_configured": true, 00:27:25.325 "data_offset": 2048, 00:27:25.325 "data_size": 63488 00:27:25.325 }, 00:27:25.325 { 00:27:25.325 "name": "BaseBdev2", 00:27:25.325 "uuid": "9cf646aa-3a70-507e-b17c-1681d608d68f", 00:27:25.325 "is_configured": true, 00:27:25.325 "data_offset": 2048, 00:27:25.325 "data_size": 63488 00:27:25.325 }, 00:27:25.325 { 00:27:25.325 "name": "BaseBdev3", 00:27:25.325 "uuid": "8f1465e3-5144-5915-a49a-b7fc7d560852", 00:27:25.325 "is_configured": true, 00:27:25.325 "data_offset": 2048, 00:27:25.325 "data_size": 63488 00:27:25.325 }, 00:27:25.325 { 00:27:25.325 "name": "BaseBdev4", 00:27:25.325 "uuid": "8d2ed107-4e2f-568f-b22b-d41c8277c23e", 00:27:25.325 "is_configured": true, 00:27:25.325 "data_offset": 2048, 00:27:25.325 "data_size": 63488 00:27:25.325 } 00:27:25.325 ] 00:27:25.325 }' 00:27:25.325 08:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.325 08:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:26.260 [2024-07-12 08:54:01.394228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:26.260 [2024-07-12 08:54:01.394282] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:26.260 [2024-07-12 08:54:01.397385] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.260 [2024-07-12 08:54:01.397460] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.260 [2024-07-12 08:54:01.397517] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.260 [2024-07-12 08:54:01.397529] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:26.260 0 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141996 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 141996 ']' 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 141996 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141996 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:26.260 killing process with pid 141996 
00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141996' 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 141996 00:27:26.260 08:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 141996 00:27:26.260 [2024-07-12 08:54:01.432591] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:26.519 [2024-07-12 08:54:01.698889] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:27.896 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.UEoTHdjIfz 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:27:27.897 00:27:27.897 real 0m9.262s 00:27:27.897 user 0m14.459s 00:27:27.897 sys 0m1.071s 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.897 ************************************ 00:27:27.897 END TEST raid_read_error_test 00:27:27.897 ************************************ 00:27:27.897 08:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.897 08:54:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:27.897 08:54:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:27:27.897 08:54:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:27.897 08:54:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.897 08:54:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:27.897 ************************************ 00:27:27.897 START TEST raid_write_error_test 00:27:27.897 ************************************ 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:27.897 08:54:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.DOH2irLUVT 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=142228 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 142228 /var/tmp/spdk-raid.sock 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 142228 ']' 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.897 08:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.897 [2024-07-12 08:54:02.999536] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:27:27.897 [2024-07-12 08:54:02.999764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142228 ] 00:27:28.156 [2024-07-12 08:54:03.172150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.414 [2024-07-12 08:54:03.382107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.414 [2024-07-12 08:54:03.578606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.983 08:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.983 08:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:28.983 08:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:28.983 08:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:29.243 BaseBdev1_malloc 00:27:29.243 08:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:29.502 true 00:27:29.502 08:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:29.762 [2024-07-12 08:54:04.731905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:29.762 [2024-07-12 08:54:04.732079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.762 [2024-07-12 08:54:04.732128] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:29.762 [2024-07-12 08:54:04.732153] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.762 [2024-07-12 08:54:04.734764] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.762 [2024-07-12 08:54:04.734842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:29.762 BaseBdev1 00:27:29.762 08:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:29.762 08:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:30.021 BaseBdev2_malloc 00:27:30.021 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:30.280 true 00:27:30.280 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:30.539 [2024-07-12 08:54:05.478195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:30.539 [2024-07-12 08:54:05.478367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.539 [2024-07-12 08:54:05.478418] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:30.539 [2024-07-12 
08:54:05.478443] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.539 [2024-07-12 08:54:05.481089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.539 [2024-07-12 08:54:05.481146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:30.539 BaseBdev2 00:27:30.539 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:30.539 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:30.799 BaseBdev3_malloc 00:27:30.799 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:30.799 true 00:27:30.799 08:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:31.058 [2024-07-12 08:54:06.216004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:31.058 [2024-07-12 08:54:06.216233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.058 [2024-07-12 08:54:06.216319] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:31.058 [2024-07-12 08:54:06.216362] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.058 [2024-07-12 08:54:06.219833] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.058 [2024-07-12 08:54:06.219909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:31.058 BaseBdev3 00:27:31.058 08:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:31.058 08:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:31.317 BaseBdev4_malloc 00:27:31.317 08:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:31.576 true 00:27:31.576 08:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:31.834 [2024-07-12 08:54:06.947102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:31.834 [2024-07-12 08:54:06.947263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.834 [2024-07-12 08:54:06.947317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:31.834 [2024-07-12 08:54:06.947353] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.834 [2024-07-12 08:54:06.950164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.834 [2024-07-12 08:54:06.950216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:31.834 BaseBdev4 00:27:31.834 08:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:32.093 [2024-07-12 08:54:07.171239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:32.093 [2024-07-12 08:54:07.173829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:32.093 [2024-07-12 08:54:07.173948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:32.093 [2024-07-12 08:54:07.174046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:32.093 [2024-07-12 08:54:07.174409] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:32.093 [2024-07-12 08:54:07.174435] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:32.093 [2024-07-12 08:54:07.174604] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:32.093 [2024-07-12 08:54:07.175078] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:32.093 [2024-07-12 08:54:07.175103] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:32.093 [2024-07-12 08:54:07.175347] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.093 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.389 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.389 "name": "raid_bdev1", 00:27:32.389 "uuid": "82ce584d-903f-4475-a137-6b2053b4d920", 00:27:32.389 "strip_size_kb": 64, 00:27:32.389 "state": "online", 00:27:32.389 "raid_level": "concat", 00:27:32.389 "superblock": true, 00:27:32.389 "num_base_bdevs": 4, 00:27:32.389 "num_base_bdevs_discovered": 4, 00:27:32.389 "num_base_bdevs_operational": 4, 00:27:32.389 "base_bdevs_list": [ 00:27:32.389 { 00:27:32.389 "name": "BaseBdev1", 00:27:32.389 "uuid": "95e8ca6d-67d5-522d-b349-82275afad766", 00:27:32.389 "is_configured": true, 00:27:32.389 "data_offset": 2048, 00:27:32.389 "data_size": 63488 00:27:32.389 }, 00:27:32.389 { 
00:27:32.389 "name": "BaseBdev2", 00:27:32.389 "uuid": "a170e853-6431-51ce-9da8-c20aa4f217e3", 00:27:32.389 "is_configured": true, 00:27:32.389 "data_offset": 2048, 00:27:32.389 "data_size": 63488 00:27:32.389 }, 00:27:32.389 { 00:27:32.389 "name": "BaseBdev3", 00:27:32.389 "uuid": "6c826772-9479-5f68-8dc0-434a4fe3cb09", 00:27:32.390 "is_configured": true, 00:27:32.390 "data_offset": 2048, 00:27:32.390 "data_size": 63488 00:27:32.390 }, 00:27:32.390 { 00:27:32.390 "name": "BaseBdev4", 00:27:32.390 "uuid": "7dcd47f3-26f4-5343-84e0-97061a2671bb", 00:27:32.390 "is_configured": true, 00:27:32.390 "data_offset": 2048, 00:27:32.390 "data_size": 63488 00:27:32.390 } 00:27:32.390 ] 00:27:32.390 }' 00:27:32.390 08:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.390 08:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.957 08:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:32.957 08:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:33.216 [2024-07-12 08:54:08.161119] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.153 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.412 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.412 "name": "raid_bdev1", 00:27:34.412 "uuid": "82ce584d-903f-4475-a137-6b2053b4d920", 00:27:34.412 "strip_size_kb": 64, 00:27:34.412 "state": "online", 00:27:34.412 
"raid_level": "concat", 00:27:34.412 "superblock": true, 00:27:34.412 "num_base_bdevs": 4, 00:27:34.412 "num_base_bdevs_discovered": 4, 00:27:34.412 "num_base_bdevs_operational": 4, 00:27:34.412 "base_bdevs_list": [ 00:27:34.412 { 00:27:34.412 "name": "BaseBdev1", 00:27:34.412 "uuid": "95e8ca6d-67d5-522d-b349-82275afad766", 00:27:34.412 "is_configured": true, 00:27:34.412 "data_offset": 2048, 00:27:34.412 "data_size": 63488 00:27:34.412 }, 00:27:34.412 { 00:27:34.412 "name": "BaseBdev2", 00:27:34.412 "uuid": "a170e853-6431-51ce-9da8-c20aa4f217e3", 00:27:34.412 "is_configured": true, 00:27:34.412 "data_offset": 2048, 00:27:34.412 "data_size": 63488 00:27:34.412 }, 00:27:34.412 { 00:27:34.412 "name": "BaseBdev3", 00:27:34.412 "uuid": "6c826772-9479-5f68-8dc0-434a4fe3cb09", 00:27:34.412 "is_configured": true, 00:27:34.412 "data_offset": 2048, 00:27:34.412 "data_size": 63488 00:27:34.412 }, 00:27:34.412 { 00:27:34.412 "name": "BaseBdev4", 00:27:34.412 "uuid": "7dcd47f3-26f4-5343-84e0-97061a2671bb", 00:27:34.412 "is_configured": true, 00:27:34.412 "data_offset": 2048, 00:27:34.412 "data_size": 63488 00:27:34.412 } 00:27:34.412 ] 00:27:34.412 }' 00:27:34.412 08:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.412 08:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:35.348 [2024-07-12 08:54:10.502026] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:35.348 [2024-07-12 08:54:10.502093] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:35.348 [2024-07-12 08:54:10.505013] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.348 [2024-07-12 08:54:10.505133] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.348 [2024-07-12 08:54:10.505194] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.348 [2024-07-12 08:54:10.505207] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:35.348 0 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 142228 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 142228 ']' 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 142228 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.348 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142228 00:27:35.606 killing process with pid 142228 00:27:35.606 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:35.606 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:35.606 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142228' 00:27:35.606 08:54:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 142228 00:27:35.606 08:54:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 142228 00:27:35.606 [2024-07-12 08:54:10.543060] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:35.865 [2024-07-12 08:54:10.844070] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.DOH2irLUVT 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:37.243 ************************************ 00:27:37.243 END TEST raid_write_error_test 00:27:37.243 ************************************ 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:27:37.243 00:27:37.243 real 0m9.214s 00:27:37.243 user 0m14.294s 00:27:37.243 sys 0m0.969s 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.243 08:54:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.243 08:54:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:37.243 08:54:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:27:37.243 08:54:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:27:37.243 08:54:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:37.243 08:54:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.243 08:54:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:37.243 ************************************ 00:27:37.243 START TEST raid_state_function_test 00:27:37.243 ************************************ 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:37.243 08:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=142461 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142461' 00:27:37.243 Process raid pid: 142461 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 142461 /var/tmp/spdk-raid.sock 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 142461 ']' 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.243 08:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.243 [2024-07-12 08:54:12.268655] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
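(Aside: the bdev_svc bring-up echoed above reduces to the sketch below. The binary path, RPC socket, and flags are copied from the log; the polling loop is only an assumed stand-in for the harness's waitforlisten helper, not its actual implementation.)
# Start the stub bdev app with raid debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# Assumed readiness check: poll an RPC used elsewhere in this test until the socket answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.5
done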
00:27:37.243 [2024-07-12 08:54:12.268878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.502 [2024-07-12 08:54:12.440050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.760 [2024-07-12 08:54:12.696479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.760 [2024-07-12 08:54:12.920973] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:38.328 [2024-07-12 08:54:13.497293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:38.328 [2024-07-12 08:54:13.497419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:38.328 [2024-07-12 08:54:13.497436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:38.328 [2024-07-12 08:54:13.497461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:38.328 [2024-07-12 08:54:13.497471] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:38.328 [2024-07-12 08:54:13.497488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:38.328 [2024-07-12 08:54:13.497496] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:38.328 [2024-07-12 08:54:13.497519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.328 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:27:38.588 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.588 "name": "Existed_Raid", 00:27:38.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.588 "strip_size_kb": 0, 00:27:38.588 "state": "configuring", 00:27:38.588 "raid_level": "raid1", 00:27:38.588 "superblock": false, 00:27:38.588 "num_base_bdevs": 4, 00:27:38.588 "num_base_bdevs_discovered": 0, 00:27:38.588 "num_base_bdevs_operational": 4, 00:27:38.588 "base_bdevs_list": [ 00:27:38.588 { 00:27:38.588 "name": "BaseBdev1", 00:27:38.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.588 "is_configured": false, 00:27:38.588 "data_offset": 0, 00:27:38.588 "data_size": 0 00:27:38.588 }, 00:27:38.588 { 00:27:38.588 "name": "BaseBdev2", 00:27:38.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.588 "is_configured": false, 00:27:38.588 "data_offset": 0, 00:27:38.588 "data_size": 0 00:27:38.588 }, 00:27:38.588 { 00:27:38.588 "name": "BaseBdev3", 00:27:38.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.588 "is_configured": false, 00:27:38.588 "data_offset": 0, 00:27:38.588 "data_size": 0 00:27:38.588 }, 00:27:38.588 { 00:27:38.588 "name": "BaseBdev4", 00:27:38.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.588 "is_configured": false, 00:27:38.588 "data_offset": 0, 00:27:38.588 "data_size": 0 00:27:38.588 } 00:27:38.588 ] 00:27:38.588 }' 00:27:38.588 08:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.588 08:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.523 08:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:39.523 [2024-07-12 08:54:14.689481] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:39.523 [2024-07-12 08:54:14.689579] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:27:39.523 08:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:39.782 [2024-07-12 08:54:14.957540] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:39.782 [2024-07-12 08:54:14.957661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:39.782 [2024-07-12 08:54:14.957675] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:39.782 [2024-07-12 08:54:14.957739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:39.782 [2024-07-12 08:54:14.957750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:39.782 [2024-07-12 08:54:14.957788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:39.782 [2024-07-12 08:54:14.957796] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:39.782 [2024-07-12 08:54:14.957821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:39.782 08:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:40.040 [2024-07-12 08:54:15.228937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.040 BaseBdev1 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:40.299 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:40.558 [ 00:27:40.558 { 00:27:40.558 "name": "BaseBdev1", 00:27:40.558 "aliases": [ 00:27:40.558 "90f83110-c1d1-48de-82e5-90b3d1aced84" 00:27:40.558 ], 00:27:40.558 "product_name": "Malloc disk", 00:27:40.558 "block_size": 512, 00:27:40.558 "num_blocks": 65536, 00:27:40.558 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:40.558 "assigned_rate_limits": { 00:27:40.558 "rw_ios_per_sec": 0, 00:27:40.558 "rw_mbytes_per_sec": 0, 00:27:40.558 "r_mbytes_per_sec": 0, 00:27:40.558 "w_mbytes_per_sec": 0 00:27:40.558 }, 00:27:40.558 "claimed": true, 00:27:40.558 "claim_type": "exclusive_write", 00:27:40.558 "zoned": false, 00:27:40.558 "supported_io_types": { 00:27:40.558 "read": true, 00:27:40.558 "write": true, 00:27:40.558 "unmap": true, 00:27:40.558 "flush": true, 00:27:40.558 "reset": true, 00:27:40.558 "nvme_admin": false, 00:27:40.558 "nvme_io": false, 00:27:40.558 "nvme_io_md": false, 00:27:40.558 "write_zeroes": true, 00:27:40.558 "zcopy": true, 00:27:40.558 "get_zone_info": false, 00:27:40.558 "zone_management": false, 00:27:40.558 "zone_append": false, 00:27:40.558 "compare": false, 00:27:40.558 "compare_and_write": false, 00:27:40.558 "abort": true, 00:27:40.558 "seek_hole": false, 00:27:40.558 "seek_data": false, 00:27:40.558 "copy": true, 00:27:40.558 "nvme_iov_md": false 00:27:40.558 }, 00:27:40.558 "memory_domains": [ 00:27:40.558 { 00:27:40.558 "dma_device_id": "system", 00:27:40.558 "dma_device_type": 1 00:27:40.558 }, 00:27:40.558 { 00:27:40.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.558 "dma_device_type": 2 00:27:40.558 } 00:27:40.558 ], 00:27:40.558 "driver_specific": {} 00:27:40.558 } 00:27:40.558 ] 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.558 08:54:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.558 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.816 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.816 08:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.074 08:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.074 "name": "Existed_Raid", 00:27:41.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.074 "strip_size_kb": 0, 00:27:41.074 "state": "configuring", 00:27:41.074 "raid_level": "raid1", 00:27:41.074 "superblock": false, 00:27:41.074 "num_base_bdevs": 4, 00:27:41.074 "num_base_bdevs_discovered": 1, 00:27:41.074 "num_base_bdevs_operational": 4, 00:27:41.074 "base_bdevs_list": [ 00:27:41.074 { 00:27:41.074 "name": "BaseBdev1", 00:27:41.074 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:41.074 "is_configured": true, 00:27:41.074 "data_offset": 0, 00:27:41.074 "data_size": 65536 00:27:41.074 }, 00:27:41.074 { 00:27:41.074 "name": "BaseBdev2", 00:27:41.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.074 "is_configured": false, 00:27:41.074 "data_offset": 0, 00:27:41.074 "data_size": 0 00:27:41.074 }, 00:27:41.074 { 00:27:41.074 "name": "BaseBdev3", 00:27:41.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.074 "is_configured": false, 00:27:41.074 "data_offset": 0, 00:27:41.074 "data_size": 0 00:27:41.074 }, 00:27:41.074 { 00:27:41.074 "name": "BaseBdev4", 00:27:41.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.074 "is_configured": false, 00:27:41.074 "data_offset": 0, 00:27:41.074 "data_size": 0 00:27:41.074 } 00:27:41.074 ] 00:27:41.074 }' 00:27:41.074 08:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.074 08:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.640 08:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:41.898 [2024-07-12 08:54:16.973451] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:41.898 [2024-07-12 08:54:16.973546] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:27:41.898 08:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:42.157 [2024-07-12 08:54:17.253532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:42.157 [2024-07-12 08:54:17.255683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:42.157 
[2024-07-12 08:54:17.255760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:42.157 [2024-07-12 08:54:17.255775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:42.157 [2024-07-12 08:54:17.255802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:42.157 [2024-07-12 08:54:17.255812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:42.157 [2024-07-12 08:54:17.255840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.157 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.416 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.416 "name": "Existed_Raid", 00:27:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.416 "strip_size_kb": 0, 00:27:42.416 "state": "configuring", 00:27:42.416 "raid_level": "raid1", 00:27:42.416 "superblock": false, 00:27:42.416 "num_base_bdevs": 4, 00:27:42.416 "num_base_bdevs_discovered": 1, 00:27:42.416 "num_base_bdevs_operational": 4, 00:27:42.416 "base_bdevs_list": [ 00:27:42.416 { 00:27:42.416 "name": "BaseBdev1", 00:27:42.416 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:42.416 "is_configured": true, 00:27:42.416 "data_offset": 0, 00:27:42.416 "data_size": 65536 00:27:42.416 }, 00:27:42.416 { 00:27:42.416 "name": "BaseBdev2", 00:27:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.416 "is_configured": false, 00:27:42.416 "data_offset": 0, 00:27:42.416 "data_size": 0 00:27:42.416 }, 00:27:42.416 { 00:27:42.416 "name": "BaseBdev3", 00:27:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.416 "is_configured": false, 00:27:42.416 "data_offset": 0, 00:27:42.416 "data_size": 0 00:27:42.416 }, 00:27:42.416 { 00:27:42.416 "name": "BaseBdev4", 
00:27:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.416 "is_configured": false, 00:27:42.416 "data_offset": 0, 00:27:42.416 "data_size": 0 00:27:42.416 } 00:27:42.416 ] 00:27:42.416 }' 00:27:42.416 08:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.416 08:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:43.350 [2024-07-12 08:54:18.526113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:43.350 BaseBdev2 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:43.350 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:43.916 08:54:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:43.916 [ 00:27:43.916 { 00:27:43.916 "name": "BaseBdev2", 00:27:43.916 "aliases": [ 00:27:43.916 "b1e00402-9884-4ece-aa13-8b69e8844ba2" 00:27:43.916 ], 00:27:43.916 "product_name": "Malloc disk", 00:27:43.916 "block_size": 512, 00:27:43.916 "num_blocks": 65536, 00:27:43.916 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:43.916 "assigned_rate_limits": { 00:27:43.916 "rw_ios_per_sec": 0, 00:27:43.916 "rw_mbytes_per_sec": 0, 00:27:43.916 "r_mbytes_per_sec": 0, 00:27:43.916 "w_mbytes_per_sec": 0 00:27:43.916 }, 00:27:43.917 "claimed": true, 00:27:43.917 "claim_type": "exclusive_write", 00:27:43.917 "zoned": false, 00:27:43.917 "supported_io_types": { 00:27:43.917 "read": true, 00:27:43.917 "write": true, 00:27:43.917 "unmap": true, 00:27:43.917 "flush": true, 00:27:43.917 "reset": true, 00:27:43.917 "nvme_admin": false, 00:27:43.917 "nvme_io": false, 00:27:43.917 "nvme_io_md": false, 00:27:43.917 "write_zeroes": true, 00:27:43.917 "zcopy": true, 00:27:43.917 "get_zone_info": false, 00:27:43.917 "zone_management": false, 00:27:43.917 "zone_append": false, 00:27:43.917 "compare": false, 00:27:43.917 "compare_and_write": false, 00:27:43.917 "abort": true, 00:27:43.917 "seek_hole": false, 00:27:43.917 "seek_data": false, 00:27:43.917 "copy": true, 00:27:43.917 "nvme_iov_md": false 00:27:43.917 }, 00:27:43.917 "memory_domains": [ 00:27:43.917 { 00:27:43.917 "dma_device_id": "system", 00:27:43.917 "dma_device_type": 1 00:27:43.917 }, 00:27:43.917 { 00:27:43.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.917 "dma_device_type": 2 00:27:43.917 } 00:27:43.917 ], 00:27:43.917 "driver_specific": {} 00:27:43.917 } 00:27:43.917 ] 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.917 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.176 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:44.176 "name": "Existed_Raid", 00:27:44.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.176 "strip_size_kb": 0, 00:27:44.176 "state": "configuring", 00:27:44.176 "raid_level": "raid1", 00:27:44.176 "superblock": false, 00:27:44.176 "num_base_bdevs": 4, 00:27:44.176 "num_base_bdevs_discovered": 2, 00:27:44.176 "num_base_bdevs_operational": 4, 00:27:44.176 "base_bdevs_list": [ 00:27:44.176 { 00:27:44.176 "name": "BaseBdev1", 00:27:44.176 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:44.176 "is_configured": true, 00:27:44.176 "data_offset": 0, 00:27:44.176 "data_size": 65536 00:27:44.176 }, 00:27:44.176 { 00:27:44.176 "name": "BaseBdev2", 00:27:44.176 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:44.176 "is_configured": true, 00:27:44.176 "data_offset": 0, 00:27:44.176 "data_size": 65536 00:27:44.176 }, 00:27:44.176 { 00:27:44.176 "name": "BaseBdev3", 00:27:44.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.176 "is_configured": false, 00:27:44.176 "data_offset": 0, 00:27:44.176 "data_size": 0 00:27:44.176 }, 00:27:44.176 { 00:27:44.176 "name": "BaseBdev4", 00:27:44.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.176 "is_configured": false, 00:27:44.176 "data_offset": 0, 00:27:44.176 "data_size": 0 00:27:44.176 } 00:27:44.176 ] 00:27:44.176 }' 00:27:44.176 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:44.176 08:54:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:45.113 08:54:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
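(Aside: the assembly step the test repeats here condenses to the sketch below, using only RPCs visible in this log. bdev_malloc_create 32 512 yields a 32 MiB bdev with 512-byte blocks, i.e. the num_blocks 65536 reported above; the jq filter mirrors verify_raid_bdev_state.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Add the next malloc base bdev; the configuring raid claims it as soon as it appears
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev3
# Re-read the raid's view: num_base_bdevs_discovered should tick up by one
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'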
00:27:45.113 [2024-07-12 08:54:20.226402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:45.113 BaseBdev3 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:45.113 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.371 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:45.629 [ 00:27:45.629 { 00:27:45.629 "name": "BaseBdev3", 00:27:45.629 "aliases": [ 00:27:45.629 "340f9be8-bc9c-4978-9027-e79841f0b7f5" 00:27:45.629 ], 00:27:45.629 "product_name": "Malloc disk", 00:27:45.629 "block_size": 512, 00:27:45.629 "num_blocks": 65536, 00:27:45.629 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:45.629 "assigned_rate_limits": { 00:27:45.629 "rw_ios_per_sec": 0, 00:27:45.629 "rw_mbytes_per_sec": 0, 00:27:45.629 "r_mbytes_per_sec": 0, 00:27:45.629 "w_mbytes_per_sec": 0 00:27:45.629 }, 00:27:45.629 "claimed": true, 00:27:45.629 "claim_type": "exclusive_write", 00:27:45.629 "zoned": false, 00:27:45.629 "supported_io_types": { 00:27:45.629 "read": true, 00:27:45.629 "write": true, 00:27:45.629 "unmap": true, 00:27:45.629 "flush": true, 00:27:45.629 "reset": true, 00:27:45.629 "nvme_admin": false, 00:27:45.629 "nvme_io": false, 00:27:45.629 "nvme_io_md": false, 00:27:45.629 "write_zeroes": true, 00:27:45.629 "zcopy": true, 00:27:45.629 "get_zone_info": false, 00:27:45.629 "zone_management": false, 00:27:45.629 "zone_append": false, 00:27:45.629 "compare": false, 00:27:45.629 "compare_and_write": false, 00:27:45.629 "abort": true, 00:27:45.629 "seek_hole": false, 00:27:45.629 "seek_data": false, 00:27:45.629 "copy": true, 00:27:45.629 "nvme_iov_md": false 00:27:45.629 }, 00:27:45.629 "memory_domains": [ 00:27:45.629 { 00:27:45.629 "dma_device_id": "system", 00:27:45.629 "dma_device_type": 1 00:27:45.629 }, 00:27:45.629 { 00:27:45.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.629 "dma_device_type": 2 00:27:45.629 } 00:27:45.629 ], 00:27:45.629 "driver_specific": {} 00:27:45.629 } 00:27:45.629 ] 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.629 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.887 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.887 "name": "Existed_Raid", 00:27:45.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.887 "strip_size_kb": 0, 00:27:45.887 "state": "configuring", 00:27:45.887 "raid_level": "raid1", 00:27:45.887 "superblock": false, 00:27:45.887 "num_base_bdevs": 4, 00:27:45.887 "num_base_bdevs_discovered": 3, 00:27:45.887 "num_base_bdevs_operational": 4, 00:27:45.887 "base_bdevs_list": [ 00:27:45.887 { 00:27:45.887 "name": "BaseBdev1", 00:27:45.887 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:45.887 "is_configured": true, 00:27:45.887 "data_offset": 0, 00:27:45.887 "data_size": 65536 00:27:45.887 }, 00:27:45.887 { 00:27:45.887 "name": "BaseBdev2", 00:27:45.887 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:45.887 "is_configured": true, 00:27:45.887 "data_offset": 0, 00:27:45.887 "data_size": 65536 00:27:45.887 }, 00:27:45.887 { 00:27:45.887 "name": "BaseBdev3", 00:27:45.887 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:45.887 "is_configured": true, 00:27:45.887 "data_offset": 0, 00:27:45.887 "data_size": 65536 00:27:45.887 }, 00:27:45.887 { 00:27:45.888 "name": "BaseBdev4", 00:27:45.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.888 "is_configured": false, 00:27:45.888 "data_offset": 0, 00:27:45.888 "data_size": 0 00:27:45.888 } 00:27:45.888 ] 00:27:45.888 }' 00:27:45.888 08:54:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.888 08:54:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:46.823 [2024-07-12 08:54:21.986912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:46.823 [2024-07-12 08:54:21.987001] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:27:46.823 [2024-07-12 08:54:21.987013] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:46.823 [2024-07-12 08:54:21.987164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:46.823 [2024-07-12 08:54:21.987637] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007580 00:27:46.823 [2024-07-12 08:54:21.987661] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:27:46.823 [2024-07-12 08:54:21.987946] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.823 BaseBdev4 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:46.823 08:54:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:47.082 08:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:47.340 [ 00:27:47.340 { 00:27:47.340 "name": "BaseBdev4", 00:27:47.340 "aliases": [ 00:27:47.340 "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e" 00:27:47.340 ], 00:27:47.340 "product_name": "Malloc disk", 00:27:47.340 "block_size": 512, 00:27:47.340 "num_blocks": 65536, 00:27:47.340 "uuid": "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e", 00:27:47.340 "assigned_rate_limits": { 00:27:47.340 "rw_ios_per_sec": 0, 00:27:47.340 "rw_mbytes_per_sec": 0, 00:27:47.340 "r_mbytes_per_sec": 0, 00:27:47.340 "w_mbytes_per_sec": 0 00:27:47.340 }, 00:27:47.340 "claimed": true, 00:27:47.340 "claim_type": "exclusive_write", 00:27:47.340 "zoned": false, 00:27:47.340 "supported_io_types": { 00:27:47.340 "read": true, 00:27:47.340 "write": true, 00:27:47.340 "unmap": true, 00:27:47.340 "flush": true, 00:27:47.340 "reset": true, 00:27:47.340 "nvme_admin": false, 00:27:47.340 "nvme_io": false, 00:27:47.340 "nvme_io_md": false, 00:27:47.340 "write_zeroes": true, 00:27:47.340 "zcopy": true, 00:27:47.341 "get_zone_info": false, 00:27:47.341 "zone_management": false, 00:27:47.341 "zone_append": false, 00:27:47.341 "compare": false, 00:27:47.341 "compare_and_write": false, 00:27:47.341 "abort": true, 00:27:47.341 "seek_hole": false, 00:27:47.341 "seek_data": false, 00:27:47.341 "copy": true, 00:27:47.341 "nvme_iov_md": false 00:27:47.341 }, 00:27:47.341 "memory_domains": [ 00:27:47.341 { 00:27:47.341 "dma_device_id": "system", 00:27:47.341 "dma_device_type": 1 00:27:47.341 }, 00:27:47.341 { 00:27:47.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.341 "dma_device_type": 2 00:27:47.341 } 00:27:47.341 ], 00:27:47.341 "driver_specific": {} 00:27:47.341 } 00:27:47.341 ] 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.341 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.599 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.599 "name": "Existed_Raid", 00:27:47.599 "uuid": "ebcf1164-0b9a-4e8d-9303-76fe6536ed35", 00:27:47.599 "strip_size_kb": 0, 00:27:47.599 "state": "online", 00:27:47.599 "raid_level": "raid1", 00:27:47.599 "superblock": false, 00:27:47.599 "num_base_bdevs": 4, 00:27:47.599 "num_base_bdevs_discovered": 4, 00:27:47.599 "num_base_bdevs_operational": 4, 00:27:47.599 "base_bdevs_list": [ 00:27:47.599 { 00:27:47.599 "name": "BaseBdev1", 00:27:47.599 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:47.599 "is_configured": true, 00:27:47.599 "data_offset": 0, 00:27:47.599 "data_size": 65536 00:27:47.599 }, 00:27:47.599 { 00:27:47.599 "name": "BaseBdev2", 00:27:47.599 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:47.599 "is_configured": true, 00:27:47.599 "data_offset": 0, 00:27:47.599 "data_size": 65536 00:27:47.599 }, 00:27:47.599 { 00:27:47.599 "name": "BaseBdev3", 00:27:47.599 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:47.599 "is_configured": true, 00:27:47.599 "data_offset": 0, 00:27:47.599 "data_size": 65536 00:27:47.599 }, 00:27:47.599 { 00:27:47.599 "name": "BaseBdev4", 00:27:47.599 "uuid": "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e", 00:27:47.599 "is_configured": true, 00:27:47.599 "data_offset": 0, 00:27:47.599 "data_size": 65536 00:27:47.599 } 00:27:47.599 ] 00:27:47.599 }' 00:27:47.599 08:54:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.599 08:54:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:48.552 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:27:48.553 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:48.553 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:48.553 [2024-07-12 08:54:23.707724] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:48.553 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:48.553 "name": "Existed_Raid", 00:27:48.553 "aliases": [ 00:27:48.553 "ebcf1164-0b9a-4e8d-9303-76fe6536ed35" 00:27:48.553 ], 00:27:48.553 "product_name": "Raid Volume", 00:27:48.553 "block_size": 512, 00:27:48.553 "num_blocks": 65536, 00:27:48.553 "uuid": "ebcf1164-0b9a-4e8d-9303-76fe6536ed35", 00:27:48.553 "assigned_rate_limits": { 00:27:48.553 "rw_ios_per_sec": 0, 00:27:48.553 "rw_mbytes_per_sec": 0, 00:27:48.553 "r_mbytes_per_sec": 0, 00:27:48.553 "w_mbytes_per_sec": 0 00:27:48.553 }, 00:27:48.553 "claimed": false, 00:27:48.553 "zoned": false, 00:27:48.553 "supported_io_types": { 00:27:48.553 "read": true, 00:27:48.553 "write": true, 00:27:48.553 "unmap": false, 00:27:48.553 "flush": false, 00:27:48.553 "reset": true, 00:27:48.553 "nvme_admin": false, 00:27:48.553 "nvme_io": false, 00:27:48.553 "nvme_io_md": false, 00:27:48.553 "write_zeroes": true, 00:27:48.553 "zcopy": false, 00:27:48.553 "get_zone_info": false, 00:27:48.553 "zone_management": false, 00:27:48.553 "zone_append": false, 00:27:48.553 "compare": false, 00:27:48.553 "compare_and_write": false, 00:27:48.553 "abort": false, 00:27:48.553 "seek_hole": false, 00:27:48.553 "seek_data": false, 00:27:48.553 "copy": false, 00:27:48.553 "nvme_iov_md": false 00:27:48.553 }, 00:27:48.553 "memory_domains": [ 00:27:48.553 { 00:27:48.553 "dma_device_id": "system", 00:27:48.553 "dma_device_type": 1 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.553 "dma_device_type": 2 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "system", 00:27:48.553 "dma_device_type": 1 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.553 "dma_device_type": 2 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "system", 00:27:48.553 "dma_device_type": 1 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.553 "dma_device_type": 2 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "system", 00:27:48.553 "dma_device_type": 1 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.553 "dma_device_type": 2 00:27:48.553 } 00:27:48.553 ], 00:27:48.553 "driver_specific": { 00:27:48.553 "raid": { 00:27:48.553 "uuid": "ebcf1164-0b9a-4e8d-9303-76fe6536ed35", 00:27:48.553 "strip_size_kb": 0, 00:27:48.553 "state": "online", 00:27:48.553 "raid_level": "raid1", 00:27:48.553 "superblock": false, 00:27:48.553 "num_base_bdevs": 4, 00:27:48.553 "num_base_bdevs_discovered": 4, 00:27:48.553 "num_base_bdevs_operational": 4, 00:27:48.553 "base_bdevs_list": [ 00:27:48.553 { 00:27:48.553 "name": "BaseBdev1", 00:27:48.553 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:48.553 "is_configured": true, 00:27:48.553 "data_offset": 0, 00:27:48.553 "data_size": 65536 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "name": "BaseBdev2", 00:27:48.553 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:48.553 "is_configured": true, 00:27:48.553 "data_offset": 0, 00:27:48.553 
"data_size": 65536 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "name": "BaseBdev3", 00:27:48.553 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:48.553 "is_configured": true, 00:27:48.553 "data_offset": 0, 00:27:48.553 "data_size": 65536 00:27:48.553 }, 00:27:48.553 { 00:27:48.553 "name": "BaseBdev4", 00:27:48.553 "uuid": "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e", 00:27:48.553 "is_configured": true, 00:27:48.553 "data_offset": 0, 00:27:48.553 "data_size": 65536 00:27:48.553 } 00:27:48.553 ] 00:27:48.553 } 00:27:48.553 } 00:27:48.553 }' 00:27:48.553 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:48.810 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:48.810 BaseBdev2 00:27:48.810 BaseBdev3 00:27:48.810 BaseBdev4' 00:27:48.810 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:48.810 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:48.810 08:54:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:49.069 "name": "BaseBdev1", 00:27:49.069 "aliases": [ 00:27:49.069 "90f83110-c1d1-48de-82e5-90b3d1aced84" 00:27:49.069 ], 00:27:49.069 "product_name": "Malloc disk", 00:27:49.069 "block_size": 512, 00:27:49.069 "num_blocks": 65536, 00:27:49.069 "uuid": "90f83110-c1d1-48de-82e5-90b3d1aced84", 00:27:49.069 "assigned_rate_limits": { 00:27:49.069 "rw_ios_per_sec": 0, 00:27:49.069 "rw_mbytes_per_sec": 0, 00:27:49.069 "r_mbytes_per_sec": 0, 00:27:49.069 "w_mbytes_per_sec": 0 00:27:49.069 }, 00:27:49.069 "claimed": true, 00:27:49.069 "claim_type": "exclusive_write", 00:27:49.069 "zoned": false, 00:27:49.069 "supported_io_types": { 00:27:49.069 "read": true, 00:27:49.069 "write": true, 00:27:49.069 "unmap": true, 00:27:49.069 "flush": true, 00:27:49.069 "reset": true, 00:27:49.069 "nvme_admin": false, 00:27:49.069 "nvme_io": false, 00:27:49.069 "nvme_io_md": false, 00:27:49.069 "write_zeroes": true, 00:27:49.069 "zcopy": true, 00:27:49.069 "get_zone_info": false, 00:27:49.069 "zone_management": false, 00:27:49.069 "zone_append": false, 00:27:49.069 "compare": false, 00:27:49.069 "compare_and_write": false, 00:27:49.069 "abort": true, 00:27:49.069 "seek_hole": false, 00:27:49.069 "seek_data": false, 00:27:49.069 "copy": true, 00:27:49.069 "nvme_iov_md": false 00:27:49.069 }, 00:27:49.069 "memory_domains": [ 00:27:49.069 { 00:27:49.069 "dma_device_id": "system", 00:27:49.069 "dma_device_type": 1 00:27:49.069 }, 00:27:49.069 { 00:27:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.069 "dma_device_type": 2 00:27:49.069 } 00:27:49.069 ], 00:27:49.069 "driver_specific": {} 00:27:49.069 }' 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.069 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.327 08:54:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:49.327 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:49.585 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:49.585 "name": "BaseBdev2", 00:27:49.585 "aliases": [ 00:27:49.585 "b1e00402-9884-4ece-aa13-8b69e8844ba2" 00:27:49.585 ], 00:27:49.585 "product_name": "Malloc disk", 00:27:49.585 "block_size": 512, 00:27:49.585 "num_blocks": 65536, 00:27:49.585 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:49.585 "assigned_rate_limits": { 00:27:49.585 "rw_ios_per_sec": 0, 00:27:49.585 "rw_mbytes_per_sec": 0, 00:27:49.585 "r_mbytes_per_sec": 0, 00:27:49.585 "w_mbytes_per_sec": 0 00:27:49.585 }, 00:27:49.585 "claimed": true, 00:27:49.585 "claim_type": "exclusive_write", 00:27:49.585 "zoned": false, 00:27:49.585 "supported_io_types": { 00:27:49.585 "read": true, 00:27:49.585 "write": true, 00:27:49.585 "unmap": true, 00:27:49.585 "flush": true, 00:27:49.585 "reset": true, 00:27:49.585 "nvme_admin": false, 00:27:49.585 "nvme_io": false, 00:27:49.585 "nvme_io_md": false, 00:27:49.585 "write_zeroes": true, 00:27:49.585 "zcopy": true, 00:27:49.585 "get_zone_info": false, 00:27:49.585 "zone_management": false, 00:27:49.585 "zone_append": false, 00:27:49.585 "compare": false, 00:27:49.585 "compare_and_write": false, 00:27:49.585 "abort": true, 00:27:49.585 "seek_hole": false, 00:27:49.585 "seek_data": false, 00:27:49.585 "copy": true, 00:27:49.585 "nvme_iov_md": false 00:27:49.585 }, 00:27:49.585 "memory_domains": [ 00:27:49.585 { 00:27:49.585 "dma_device_id": "system", 00:27:49.585 "dma_device_type": 1 00:27:49.585 }, 00:27:49.585 { 00:27:49.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.585 "dma_device_type": 2 00:27:49.585 } 00:27:49.585 ], 00:27:49.585 "driver_specific": {} 00:27:49.585 }' 00:27:49.585 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.585 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.843 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:49.843 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.844 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.844 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:49.844 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
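(Aside: each property check above is a paired jq comparison between the raid volume dump and one base bdev dump; the loop below is an assumed reconstruction, with the 512/null expected values coming from the Malloc disk dumps in this log.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
raid=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[]')
base=$($rpc -s $sock bdev_get_bdevs -b BaseBdev2 | jq '.[]')
# The raid volume must expose the same block size and metadata/DIF layout as each base bdev
for field in .block_size .md_size .md_interleave .dif_type; do
    [[ "$(jq "$field" <<< "$raid")" == "$(jq "$field" <<< "$base")" ]] || exit 1
done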
00:27:49.844 08:54:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:50.102 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:50.360 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:50.360 "name": "BaseBdev3", 00:27:50.360 "aliases": [ 00:27:50.360 "340f9be8-bc9c-4978-9027-e79841f0b7f5" 00:27:50.360 ], 00:27:50.360 "product_name": "Malloc disk", 00:27:50.360 "block_size": 512, 00:27:50.360 "num_blocks": 65536, 00:27:50.360 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:50.360 "assigned_rate_limits": { 00:27:50.360 "rw_ios_per_sec": 0, 00:27:50.360 "rw_mbytes_per_sec": 0, 00:27:50.360 "r_mbytes_per_sec": 0, 00:27:50.360 "w_mbytes_per_sec": 0 00:27:50.360 }, 00:27:50.360 "claimed": true, 00:27:50.360 "claim_type": "exclusive_write", 00:27:50.360 "zoned": false, 00:27:50.360 "supported_io_types": { 00:27:50.360 "read": true, 00:27:50.360 "write": true, 00:27:50.360 "unmap": true, 00:27:50.360 "flush": true, 00:27:50.360 "reset": true, 00:27:50.360 "nvme_admin": false, 00:27:50.360 "nvme_io": false, 00:27:50.360 "nvme_io_md": false, 00:27:50.360 "write_zeroes": true, 00:27:50.360 "zcopy": true, 00:27:50.360 "get_zone_info": false, 00:27:50.360 "zone_management": false, 00:27:50.360 "zone_append": false, 00:27:50.361 "compare": false, 00:27:50.361 "compare_and_write": false, 00:27:50.361 "abort": true, 00:27:50.361 "seek_hole": false, 00:27:50.361 "seek_data": false, 00:27:50.361 "copy": true, 00:27:50.361 "nvme_iov_md": false 00:27:50.361 }, 00:27:50.361 "memory_domains": [ 00:27:50.361 { 00:27:50.361 "dma_device_id": "system", 00:27:50.361 "dma_device_type": 1 00:27:50.361 }, 00:27:50.361 { 00:27:50.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.361 "dma_device_type": 2 00:27:50.361 } 00:27:50.361 ], 00:27:50.361 "driver_specific": {} 00:27:50.361 }' 00:27:50.361 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.361 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:27:50.619 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.876 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.876 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.876 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:50.876 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:50.876 08:54:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:51.135 "name": "BaseBdev4", 00:27:51.135 "aliases": [ 00:27:51.135 "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e" 00:27:51.135 ], 00:27:51.135 "product_name": "Malloc disk", 00:27:51.135 "block_size": 512, 00:27:51.135 "num_blocks": 65536, 00:27:51.135 "uuid": "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e", 00:27:51.135 "assigned_rate_limits": { 00:27:51.135 "rw_ios_per_sec": 0, 00:27:51.135 "rw_mbytes_per_sec": 0, 00:27:51.135 "r_mbytes_per_sec": 0, 00:27:51.135 "w_mbytes_per_sec": 0 00:27:51.135 }, 00:27:51.135 "claimed": true, 00:27:51.135 "claim_type": "exclusive_write", 00:27:51.135 "zoned": false, 00:27:51.135 "supported_io_types": { 00:27:51.135 "read": true, 00:27:51.135 "write": true, 00:27:51.135 "unmap": true, 00:27:51.135 "flush": true, 00:27:51.135 "reset": true, 00:27:51.135 "nvme_admin": false, 00:27:51.135 "nvme_io": false, 00:27:51.135 "nvme_io_md": false, 00:27:51.135 "write_zeroes": true, 00:27:51.135 "zcopy": true, 00:27:51.135 "get_zone_info": false, 00:27:51.135 "zone_management": false, 00:27:51.135 "zone_append": false, 00:27:51.135 "compare": false, 00:27:51.135 "compare_and_write": false, 00:27:51.135 "abort": true, 00:27:51.135 "seek_hole": false, 00:27:51.135 "seek_data": false, 00:27:51.135 "copy": true, 00:27:51.135 "nvme_iov_md": false 00:27:51.135 }, 00:27:51.135 "memory_domains": [ 00:27:51.135 { 00:27:51.135 "dma_device_id": "system", 00:27:51.135 "dma_device_type": 1 00:27:51.135 }, 00:27:51.135 { 00:27:51.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.135 "dma_device_type": 2 00:27:51.135 } 00:27:51.135 ], 00:27:51.135 "driver_specific": {} 00:27:51.135 }' 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:51.135 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:51.393 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:51.393 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:51.393 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:51.393 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:51.394 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:51.394 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:27:51.652 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:51.652 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:51.910 [2024-07-12 08:54:26.852806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.910 08:54:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.168 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.168 "name": "Existed_Raid", 00:27:52.168 "uuid": "ebcf1164-0b9a-4e8d-9303-76fe6536ed35", 00:27:52.168 "strip_size_kb": 0, 00:27:52.168 "state": "online", 00:27:52.168 "raid_level": "raid1", 00:27:52.168 "superblock": false, 00:27:52.168 "num_base_bdevs": 4, 00:27:52.168 "num_base_bdevs_discovered": 3, 00:27:52.168 "num_base_bdevs_operational": 3, 00:27:52.168 "base_bdevs_list": [ 00:27:52.168 { 00:27:52.168 "name": null, 00:27:52.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.168 "is_configured": false, 00:27:52.168 "data_offset": 0, 00:27:52.168 "data_size": 65536 00:27:52.168 }, 00:27:52.168 { 00:27:52.168 "name": "BaseBdev2", 00:27:52.168 "uuid": "b1e00402-9884-4ece-aa13-8b69e8844ba2", 00:27:52.168 "is_configured": true, 00:27:52.168 "data_offset": 0, 00:27:52.168 "data_size": 65536 00:27:52.168 }, 00:27:52.168 { 00:27:52.168 "name": "BaseBdev3", 00:27:52.168 "uuid": "340f9be8-bc9c-4978-9027-e79841f0b7f5", 00:27:52.168 "is_configured": true, 00:27:52.168 "data_offset": 0, 00:27:52.168 "data_size": 65536 00:27:52.168 
}, 00:27:52.168 { 00:27:52.168 "name": "BaseBdev4", 00:27:52.168 "uuid": "64a1a9c2-ff8e-430d-b3e7-dc6eef0cbe7e", 00:27:52.168 "is_configured": true, 00:27:52.168 "data_offset": 0, 00:27:52.168 "data_size": 65536 00:27:52.168 } 00:27:52.168 ] 00:27:52.168 }' 00:27:52.168 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.168 08:54:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.735 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:52.735 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:52.735 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.735 08:54:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:52.993 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:52.993 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:52.993 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:53.252 [2024-07-12 08:54:28.384988] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:53.512 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:53.512 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:53.512 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.512 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:53.771 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:53.771 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.771 08:54:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:54.030 [2024-07-12 08:54:28.986717] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:54.030 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:54.030 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:54.030 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.030 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:54.289 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:54.289 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:54.289 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:54.547 [2024-07-12 08:54:29.605286] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
00:27:54.547 [2024-07-12 08:54:29.605461] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.547 [2024-07-12 08:54:29.683685] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.547 [2024-07-12 08:54:29.683779] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.547 [2024-07-12 08:54:29.683794] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:27:54.547 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:54.547 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:54.547 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.547 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:54.806 08:54:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:55.065 BaseBdev2 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:55.065 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:55.324 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:55.582 [ 00:27:55.582 { 00:27:55.582 "name": "BaseBdev2", 00:27:55.582 "aliases": [ 00:27:55.582 "78e5c561-75d2-4f1b-ad92-b031fd7337be" 00:27:55.582 ], 00:27:55.582 "product_name": "Malloc disk", 00:27:55.582 "block_size": 512, 00:27:55.582 "num_blocks": 65536, 00:27:55.582 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:27:55.582 "assigned_rate_limits": { 00:27:55.582 "rw_ios_per_sec": 0, 00:27:55.582 "rw_mbytes_per_sec": 0, 00:27:55.582 "r_mbytes_per_sec": 0, 00:27:55.582 "w_mbytes_per_sec": 0 00:27:55.582 }, 00:27:55.582 "claimed": false, 00:27:55.582 "zoned": false, 00:27:55.582 "supported_io_types": { 00:27:55.582 "read": true, 00:27:55.582 "write": true, 00:27:55.582 
"unmap": true, 00:27:55.582 "flush": true, 00:27:55.582 "reset": true, 00:27:55.582 "nvme_admin": false, 00:27:55.582 "nvme_io": false, 00:27:55.582 "nvme_io_md": false, 00:27:55.582 "write_zeroes": true, 00:27:55.582 "zcopy": true, 00:27:55.582 "get_zone_info": false, 00:27:55.582 "zone_management": false, 00:27:55.582 "zone_append": false, 00:27:55.582 "compare": false, 00:27:55.582 "compare_and_write": false, 00:27:55.582 "abort": true, 00:27:55.582 "seek_hole": false, 00:27:55.582 "seek_data": false, 00:27:55.582 "copy": true, 00:27:55.582 "nvme_iov_md": false 00:27:55.582 }, 00:27:55.582 "memory_domains": [ 00:27:55.582 { 00:27:55.582 "dma_device_id": "system", 00:27:55.582 "dma_device_type": 1 00:27:55.582 }, 00:27:55.582 { 00:27:55.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.582 "dma_device_type": 2 00:27:55.582 } 00:27:55.582 ], 00:27:55.582 "driver_specific": {} 00:27:55.582 } 00:27:55.582 ] 00:27:55.582 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:55.582 08:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:55.582 08:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:55.582 08:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:55.841 BaseBdev3 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:55.841 08:54:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:56.128 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:56.387 [ 00:27:56.387 { 00:27:56.387 "name": "BaseBdev3", 00:27:56.387 "aliases": [ 00:27:56.387 "ba3900f0-96eb-4489-9c67-d74aa24a4fbd" 00:27:56.387 ], 00:27:56.387 "product_name": "Malloc disk", 00:27:56.387 "block_size": 512, 00:27:56.387 "num_blocks": 65536, 00:27:56.387 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:27:56.387 "assigned_rate_limits": { 00:27:56.387 "rw_ios_per_sec": 0, 00:27:56.387 "rw_mbytes_per_sec": 0, 00:27:56.387 "r_mbytes_per_sec": 0, 00:27:56.387 "w_mbytes_per_sec": 0 00:27:56.387 }, 00:27:56.387 "claimed": false, 00:27:56.387 "zoned": false, 00:27:56.387 "supported_io_types": { 00:27:56.387 "read": true, 00:27:56.387 "write": true, 00:27:56.387 "unmap": true, 00:27:56.387 "flush": true, 00:27:56.387 "reset": true, 00:27:56.387 "nvme_admin": false, 00:27:56.387 "nvme_io": false, 00:27:56.387 "nvme_io_md": false, 00:27:56.387 "write_zeroes": true, 00:27:56.387 "zcopy": true, 00:27:56.387 "get_zone_info": false, 00:27:56.387 "zone_management": false, 00:27:56.387 "zone_append": false, 
00:27:56.387 "compare": false, 00:27:56.387 "compare_and_write": false, 00:27:56.387 "abort": true, 00:27:56.387 "seek_hole": false, 00:27:56.387 "seek_data": false, 00:27:56.387 "copy": true, 00:27:56.387 "nvme_iov_md": false 00:27:56.387 }, 00:27:56.387 "memory_domains": [ 00:27:56.387 { 00:27:56.387 "dma_device_id": "system", 00:27:56.387 "dma_device_type": 1 00:27:56.387 }, 00:27:56.387 { 00:27:56.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:56.387 "dma_device_type": 2 00:27:56.387 } 00:27:56.387 ], 00:27:56.387 "driver_specific": {} 00:27:56.387 } 00:27:56.387 ] 00:27:56.387 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:56.387 08:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:56.387 08:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:56.387 08:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:56.646 BaseBdev4 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:56.646 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:56.905 08:54:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:57.164 [ 00:27:57.164 { 00:27:57.164 "name": "BaseBdev4", 00:27:57.164 "aliases": [ 00:27:57.164 "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4" 00:27:57.164 ], 00:27:57.164 "product_name": "Malloc disk", 00:27:57.164 "block_size": 512, 00:27:57.164 "num_blocks": 65536, 00:27:57.164 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:27:57.164 "assigned_rate_limits": { 00:27:57.164 "rw_ios_per_sec": 0, 00:27:57.164 "rw_mbytes_per_sec": 0, 00:27:57.164 "r_mbytes_per_sec": 0, 00:27:57.164 "w_mbytes_per_sec": 0 00:27:57.164 }, 00:27:57.164 "claimed": false, 00:27:57.164 "zoned": false, 00:27:57.164 "supported_io_types": { 00:27:57.164 "read": true, 00:27:57.164 "write": true, 00:27:57.164 "unmap": true, 00:27:57.164 "flush": true, 00:27:57.164 "reset": true, 00:27:57.164 "nvme_admin": false, 00:27:57.164 "nvme_io": false, 00:27:57.164 "nvme_io_md": false, 00:27:57.164 "write_zeroes": true, 00:27:57.164 "zcopy": true, 00:27:57.164 "get_zone_info": false, 00:27:57.164 "zone_management": false, 00:27:57.164 "zone_append": false, 00:27:57.164 "compare": false, 00:27:57.164 "compare_and_write": false, 00:27:57.164 "abort": true, 00:27:57.165 "seek_hole": false, 00:27:57.165 "seek_data": false, 00:27:57.165 "copy": true, 00:27:57.165 "nvme_iov_md": false 00:27:57.165 }, 00:27:57.165 "memory_domains": [ 00:27:57.165 { 00:27:57.165 "dma_device_id": "system", 00:27:57.165 
"dma_device_type": 1 00:27:57.165 }, 00:27:57.165 { 00:27:57.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.165 "dma_device_type": 2 00:27:57.165 } 00:27:57.165 ], 00:27:57.165 "driver_specific": {} 00:27:57.165 } 00:27:57.165 ] 00:27:57.165 08:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:27:57.165 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:57.165 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:57.165 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:57.424 [2024-07-12 08:54:32.430202] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:57.424 [2024-07-12 08:54:32.430311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:57.424 [2024-07-12 08:54:32.430368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:57.424 [2024-07-12 08:54:32.432475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:57.424 [2024-07-12 08:54:32.432558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.424 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:57.683 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.683 "name": "Existed_Raid", 00:27:57.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.683 "strip_size_kb": 0, 00:27:57.683 "state": "configuring", 00:27:57.683 "raid_level": "raid1", 00:27:57.683 "superblock": false, 00:27:57.683 "num_base_bdevs": 4, 00:27:57.683 "num_base_bdevs_discovered": 3, 00:27:57.683 "num_base_bdevs_operational": 4, 00:27:57.683 "base_bdevs_list": [ 00:27:57.683 { 00:27:57.683 "name": "BaseBdev1", 00:27:57.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.683 "is_configured": false, 
00:27:57.683 "data_offset": 0, 00:27:57.683 "data_size": 0 00:27:57.683 }, 00:27:57.683 { 00:27:57.683 "name": "BaseBdev2", 00:27:57.683 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:27:57.683 "is_configured": true, 00:27:57.683 "data_offset": 0, 00:27:57.683 "data_size": 65536 00:27:57.683 }, 00:27:57.683 { 00:27:57.683 "name": "BaseBdev3", 00:27:57.683 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:27:57.683 "is_configured": true, 00:27:57.683 "data_offset": 0, 00:27:57.683 "data_size": 65536 00:27:57.683 }, 00:27:57.683 { 00:27:57.683 "name": "BaseBdev4", 00:27:57.683 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:27:57.683 "is_configured": true, 00:27:57.683 "data_offset": 0, 00:27:57.683 "data_size": 65536 00:27:57.683 } 00:27:57.683 ] 00:27:57.683 }' 00:27:57.683 08:54:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.683 08:54:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.271 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:58.529 [2024-07-12 08:54:33.638449] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.530 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.788 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.788 "name": "Existed_Raid", 00:27:58.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.788 "strip_size_kb": 0, 00:27:58.788 "state": "configuring", 00:27:58.788 "raid_level": "raid1", 00:27:58.788 "superblock": false, 00:27:58.788 "num_base_bdevs": 4, 00:27:58.788 "num_base_bdevs_discovered": 2, 00:27:58.788 "num_base_bdevs_operational": 4, 00:27:58.788 "base_bdevs_list": [ 00:27:58.788 { 00:27:58.788 "name": "BaseBdev1", 00:27:58.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.788 "is_configured": false, 00:27:58.788 "data_offset": 0, 00:27:58.788 "data_size": 0 00:27:58.788 }, 00:27:58.788 { 00:27:58.788 "name": null, 00:27:58.788 "uuid": 
"78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:27:58.788 "is_configured": false, 00:27:58.788 "data_offset": 0, 00:27:58.788 "data_size": 65536 00:27:58.788 }, 00:27:58.788 { 00:27:58.788 "name": "BaseBdev3", 00:27:58.788 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:27:58.788 "is_configured": true, 00:27:58.788 "data_offset": 0, 00:27:58.788 "data_size": 65536 00:27:58.788 }, 00:27:58.788 { 00:27:58.788 "name": "BaseBdev4", 00:27:58.788 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:27:58.788 "is_configured": true, 00:27:58.788 "data_offset": 0, 00:27:58.788 "data_size": 65536 00:27:58.788 } 00:27:58.788 ] 00:27:58.788 }' 00:27:58.788 08:54:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.788 08:54:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.724 08:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.724 08:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:59.724 08:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:59.724 08:54:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:59.983 [2024-07-12 08:54:35.141986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:59.983 BaseBdev1 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:59.983 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:00.242 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:00.501 [ 00:28:00.501 { 00:28:00.501 "name": "BaseBdev1", 00:28:00.501 "aliases": [ 00:28:00.501 "63180d8e-3607-41f8-a19a-89984ccf986e" 00:28:00.501 ], 00:28:00.501 "product_name": "Malloc disk", 00:28:00.501 "block_size": 512, 00:28:00.501 "num_blocks": 65536, 00:28:00.501 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:00.501 "assigned_rate_limits": { 00:28:00.501 "rw_ios_per_sec": 0, 00:28:00.501 "rw_mbytes_per_sec": 0, 00:28:00.501 "r_mbytes_per_sec": 0, 00:28:00.501 "w_mbytes_per_sec": 0 00:28:00.501 }, 00:28:00.501 "claimed": true, 00:28:00.501 "claim_type": "exclusive_write", 00:28:00.501 "zoned": false, 00:28:00.501 "supported_io_types": { 00:28:00.501 "read": true, 00:28:00.501 "write": true, 00:28:00.501 "unmap": true, 00:28:00.501 "flush": true, 00:28:00.501 "reset": true, 00:28:00.501 "nvme_admin": false, 00:28:00.501 "nvme_io": false, 00:28:00.501 
"nvme_io_md": false, 00:28:00.501 "write_zeroes": true, 00:28:00.501 "zcopy": true, 00:28:00.501 "get_zone_info": false, 00:28:00.502 "zone_management": false, 00:28:00.502 "zone_append": false, 00:28:00.502 "compare": false, 00:28:00.502 "compare_and_write": false, 00:28:00.502 "abort": true, 00:28:00.502 "seek_hole": false, 00:28:00.502 "seek_data": false, 00:28:00.502 "copy": true, 00:28:00.502 "nvme_iov_md": false 00:28:00.502 }, 00:28:00.502 "memory_domains": [ 00:28:00.502 { 00:28:00.502 "dma_device_id": "system", 00:28:00.502 "dma_device_type": 1 00:28:00.502 }, 00:28:00.502 { 00:28:00.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:00.502 "dma_device_type": 2 00:28:00.502 } 00:28:00.502 ], 00:28:00.502 "driver_specific": {} 00:28:00.502 } 00:28:00.502 ] 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.502 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.762 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.762 "name": "Existed_Raid", 00:28:00.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.762 "strip_size_kb": 0, 00:28:00.762 "state": "configuring", 00:28:00.762 "raid_level": "raid1", 00:28:00.762 "superblock": false, 00:28:00.762 "num_base_bdevs": 4, 00:28:00.762 "num_base_bdevs_discovered": 3, 00:28:00.762 "num_base_bdevs_operational": 4, 00:28:00.762 "base_bdevs_list": [ 00:28:00.762 { 00:28:00.762 "name": "BaseBdev1", 00:28:00.762 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:00.762 "is_configured": true, 00:28:00.762 "data_offset": 0, 00:28:00.762 "data_size": 65536 00:28:00.762 }, 00:28:00.762 { 00:28:00.762 "name": null, 00:28:00.762 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:00.762 "is_configured": false, 00:28:00.762 "data_offset": 0, 00:28:00.762 "data_size": 65536 00:28:00.762 }, 00:28:00.762 { 00:28:00.762 "name": "BaseBdev3", 00:28:00.762 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:00.762 "is_configured": true, 00:28:00.762 "data_offset": 0, 00:28:00.762 "data_size": 65536 00:28:00.762 }, 00:28:00.762 { 00:28:00.762 
"name": "BaseBdev4", 00:28:00.762 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:00.762 "is_configured": true, 00:28:00.762 "data_offset": 0, 00:28:00.762 "data_size": 65536 00:28:00.762 } 00:28:00.762 ] 00:28:00.762 }' 00:28:00.762 08:54:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.762 08:54:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.698 08:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.698 08:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:01.698 08:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:01.698 08:54:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:01.956 [2024-07-12 08:54:37.038566] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.956 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.214 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.214 "name": "Existed_Raid", 00:28:02.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.214 "strip_size_kb": 0, 00:28:02.214 "state": "configuring", 00:28:02.214 "raid_level": "raid1", 00:28:02.214 "superblock": false, 00:28:02.214 "num_base_bdevs": 4, 00:28:02.214 "num_base_bdevs_discovered": 2, 00:28:02.214 "num_base_bdevs_operational": 4, 00:28:02.214 "base_bdevs_list": [ 00:28:02.214 { 00:28:02.214 "name": "BaseBdev1", 00:28:02.214 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:02.214 "is_configured": true, 00:28:02.214 "data_offset": 0, 00:28:02.214 "data_size": 65536 00:28:02.214 }, 00:28:02.214 { 00:28:02.214 "name": null, 00:28:02.214 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:02.214 "is_configured": false, 00:28:02.214 "data_offset": 0, 00:28:02.214 "data_size": 65536 
00:28:02.214 }, 00:28:02.214 { 00:28:02.214 "name": null, 00:28:02.214 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:02.214 "is_configured": false, 00:28:02.214 "data_offset": 0, 00:28:02.214 "data_size": 65536 00:28:02.214 }, 00:28:02.214 { 00:28:02.214 "name": "BaseBdev4", 00:28:02.214 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:02.214 "is_configured": true, 00:28:02.214 "data_offset": 0, 00:28:02.214 "data_size": 65536 00:28:02.214 } 00:28:02.214 ] 00:28:02.214 }' 00:28:02.214 08:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.214 08:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.148 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.148 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:03.148 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:03.148 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:03.407 [2024-07-12 08:54:38.506918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.407 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.665 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.665 "name": "Existed_Raid", 00:28:03.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.665 "strip_size_kb": 0, 00:28:03.665 "state": "configuring", 00:28:03.665 "raid_level": "raid1", 00:28:03.665 "superblock": false, 00:28:03.665 "num_base_bdevs": 4, 00:28:03.665 "num_base_bdevs_discovered": 3, 00:28:03.665 "num_base_bdevs_operational": 4, 00:28:03.665 "base_bdevs_list": [ 00:28:03.665 { 00:28:03.665 "name": "BaseBdev1", 00:28:03.665 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:03.665 
"is_configured": true, 00:28:03.665 "data_offset": 0, 00:28:03.665 "data_size": 65536 00:28:03.665 }, 00:28:03.665 { 00:28:03.665 "name": null, 00:28:03.665 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:03.665 "is_configured": false, 00:28:03.665 "data_offset": 0, 00:28:03.665 "data_size": 65536 00:28:03.665 }, 00:28:03.665 { 00:28:03.665 "name": "BaseBdev3", 00:28:03.665 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:03.665 "is_configured": true, 00:28:03.665 "data_offset": 0, 00:28:03.665 "data_size": 65536 00:28:03.665 }, 00:28:03.665 { 00:28:03.665 "name": "BaseBdev4", 00:28:03.665 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:03.665 "is_configured": true, 00:28:03.665 "data_offset": 0, 00:28:03.665 "data_size": 65536 00:28:03.665 } 00:28:03.665 ] 00:28:03.665 }' 00:28:03.665 08:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.665 08:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.599 08:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.599 08:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:04.599 08:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:04.599 08:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:04.857 [2024-07-12 08:54:39.935265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.857 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.858 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.858 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.425 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.425 "name": "Existed_Raid", 00:28:05.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.425 "strip_size_kb": 0, 00:28:05.425 "state": "configuring", 00:28:05.425 "raid_level": "raid1", 00:28:05.425 "superblock": false, 00:28:05.425 
"num_base_bdevs": 4, 00:28:05.425 "num_base_bdevs_discovered": 2, 00:28:05.425 "num_base_bdevs_operational": 4, 00:28:05.425 "base_bdevs_list": [ 00:28:05.425 { 00:28:05.425 "name": null, 00:28:05.425 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:05.425 "is_configured": false, 00:28:05.425 "data_offset": 0, 00:28:05.425 "data_size": 65536 00:28:05.425 }, 00:28:05.425 { 00:28:05.425 "name": null, 00:28:05.425 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:05.425 "is_configured": false, 00:28:05.425 "data_offset": 0, 00:28:05.425 "data_size": 65536 00:28:05.425 }, 00:28:05.425 { 00:28:05.425 "name": "BaseBdev3", 00:28:05.425 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:05.425 "is_configured": true, 00:28:05.425 "data_offset": 0, 00:28:05.425 "data_size": 65536 00:28:05.425 }, 00:28:05.425 { 00:28:05.425 "name": "BaseBdev4", 00:28:05.425 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:05.425 "is_configured": true, 00:28:05.425 "data_offset": 0, 00:28:05.425 "data_size": 65536 00:28:05.425 } 00:28:05.425 ] 00:28:05.425 }' 00:28:05.425 08:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.425 08:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.990 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.990 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:06.248 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:06.248 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:06.507 [2024-07-12 08:54:41.551198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.507 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.766 08:54:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.766 "name": "Existed_Raid", 00:28:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.766 "strip_size_kb": 0, 00:28:06.766 "state": "configuring", 00:28:06.766 "raid_level": "raid1", 00:28:06.766 "superblock": false, 00:28:06.766 "num_base_bdevs": 4, 00:28:06.766 "num_base_bdevs_discovered": 3, 00:28:06.766 "num_base_bdevs_operational": 4, 00:28:06.766 "base_bdevs_list": [ 00:28:06.766 { 00:28:06.766 "name": null, 00:28:06.766 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:06.766 "is_configured": false, 00:28:06.766 "data_offset": 0, 00:28:06.766 "data_size": 65536 00:28:06.766 }, 00:28:06.766 { 00:28:06.766 "name": "BaseBdev2", 00:28:06.766 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:06.766 "is_configured": true, 00:28:06.766 "data_offset": 0, 00:28:06.766 "data_size": 65536 00:28:06.766 }, 00:28:06.766 { 00:28:06.766 "name": "BaseBdev3", 00:28:06.766 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:06.766 "is_configured": true, 00:28:06.766 "data_offset": 0, 00:28:06.766 "data_size": 65536 00:28:06.766 }, 00:28:06.766 { 00:28:06.766 "name": "BaseBdev4", 00:28:06.766 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:06.766 "is_configured": true, 00:28:06.766 "data_offset": 0, 00:28:06.766 "data_size": 65536 00:28:06.766 } 00:28:06.766 ] 00:28:06.766 }' 00:28:06.766 08:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.766 08:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.334 08:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.334 08:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:07.593 08:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:07.593 08:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.593 08:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:07.852 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 63180d8e-3607-41f8-a19a-89984ccf986e 00:28:08.111 [2024-07-12 08:54:43.273497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:08.111 [2024-07-12 08:54:43.273583] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:08.111 [2024-07-12 08:54:43.273595] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:08.111 [2024-07-12 08:54:43.273775] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:08.111 [2024-07-12 08:54:43.274185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:08.111 [2024-07-12 08:54:43.274210] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:28:08.111 [2024-07-12 08:54:43.274484] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.111 NewBaseBdev 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:08.111 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:08.372 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:08.632 [ 00:28:08.632 { 00:28:08.632 "name": "NewBaseBdev", 00:28:08.632 "aliases": [ 00:28:08.632 "63180d8e-3607-41f8-a19a-89984ccf986e" 00:28:08.632 ], 00:28:08.632 "product_name": "Malloc disk", 00:28:08.632 "block_size": 512, 00:28:08.632 "num_blocks": 65536, 00:28:08.632 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:08.632 "assigned_rate_limits": { 00:28:08.632 "rw_ios_per_sec": 0, 00:28:08.632 "rw_mbytes_per_sec": 0, 00:28:08.632 "r_mbytes_per_sec": 0, 00:28:08.632 "w_mbytes_per_sec": 0 00:28:08.632 }, 00:28:08.632 "claimed": true, 00:28:08.632 "claim_type": "exclusive_write", 00:28:08.632 "zoned": false, 00:28:08.632 "supported_io_types": { 00:28:08.632 "read": true, 00:28:08.632 "write": true, 00:28:08.632 "unmap": true, 00:28:08.632 "flush": true, 00:28:08.632 "reset": true, 00:28:08.632 "nvme_admin": false, 00:28:08.632 "nvme_io": false, 00:28:08.632 "nvme_io_md": false, 00:28:08.632 "write_zeroes": true, 00:28:08.632 "zcopy": true, 00:28:08.632 "get_zone_info": false, 00:28:08.632 "zone_management": false, 00:28:08.632 "zone_append": false, 00:28:08.632 "compare": false, 00:28:08.632 "compare_and_write": false, 00:28:08.632 "abort": true, 00:28:08.632 "seek_hole": false, 00:28:08.632 "seek_data": false, 00:28:08.632 "copy": true, 00:28:08.632 "nvme_iov_md": false 00:28:08.632 }, 00:28:08.632 "memory_domains": [ 00:28:08.632 { 00:28:08.632 "dma_device_id": "system", 00:28:08.632 "dma_device_type": 1 00:28:08.632 }, 00:28:08.632 { 00:28:08.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.632 "dma_device_type": 2 00:28:08.632 } 00:28:08.632 ], 00:28:08.632 "driver_specific": {} 00:28:08.632 } 00:28:08.632 ] 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:08.632 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:28:08.633 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.633 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.633 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.633 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.633 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.891 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.891 "name": "Existed_Raid", 00:28:08.891 "uuid": "44279ac6-4c76-4803-8af6-255a7de6622a", 00:28:08.891 "strip_size_kb": 0, 00:28:08.891 "state": "online", 00:28:08.891 "raid_level": "raid1", 00:28:08.891 "superblock": false, 00:28:08.891 "num_base_bdevs": 4, 00:28:08.891 "num_base_bdevs_discovered": 4, 00:28:08.891 "num_base_bdevs_operational": 4, 00:28:08.891 "base_bdevs_list": [ 00:28:08.891 { 00:28:08.891 "name": "NewBaseBdev", 00:28:08.891 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:08.891 "is_configured": true, 00:28:08.891 "data_offset": 0, 00:28:08.891 "data_size": 65536 00:28:08.891 }, 00:28:08.891 { 00:28:08.891 "name": "BaseBdev2", 00:28:08.891 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:08.891 "is_configured": true, 00:28:08.891 "data_offset": 0, 00:28:08.891 "data_size": 65536 00:28:08.891 }, 00:28:08.891 { 00:28:08.891 "name": "BaseBdev3", 00:28:08.891 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:08.891 "is_configured": true, 00:28:08.891 "data_offset": 0, 00:28:08.891 "data_size": 65536 00:28:08.891 }, 00:28:08.891 { 00:28:08.891 "name": "BaseBdev4", 00:28:08.891 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:08.891 "is_configured": true, 00:28:08.891 "data_offset": 0, 00:28:08.891 "data_size": 65536 00:28:08.891 } 00:28:08.891 ] 00:28:08.891 }' 00:28:08.891 08:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.891 08:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:09.460 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:09.719 [2024-07-12 08:54:44.906433] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.979 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:09.979 "name": "Existed_Raid", 00:28:09.979 "aliases": [ 00:28:09.979 
"44279ac6-4c76-4803-8af6-255a7de6622a" 00:28:09.979 ], 00:28:09.979 "product_name": "Raid Volume", 00:28:09.979 "block_size": 512, 00:28:09.979 "num_blocks": 65536, 00:28:09.979 "uuid": "44279ac6-4c76-4803-8af6-255a7de6622a", 00:28:09.979 "assigned_rate_limits": { 00:28:09.979 "rw_ios_per_sec": 0, 00:28:09.979 "rw_mbytes_per_sec": 0, 00:28:09.979 "r_mbytes_per_sec": 0, 00:28:09.979 "w_mbytes_per_sec": 0 00:28:09.979 }, 00:28:09.979 "claimed": false, 00:28:09.979 "zoned": false, 00:28:09.979 "supported_io_types": { 00:28:09.979 "read": true, 00:28:09.979 "write": true, 00:28:09.979 "unmap": false, 00:28:09.979 "flush": false, 00:28:09.979 "reset": true, 00:28:09.979 "nvme_admin": false, 00:28:09.979 "nvme_io": false, 00:28:09.979 "nvme_io_md": false, 00:28:09.979 "write_zeroes": true, 00:28:09.979 "zcopy": false, 00:28:09.979 "get_zone_info": false, 00:28:09.979 "zone_management": false, 00:28:09.979 "zone_append": false, 00:28:09.979 "compare": false, 00:28:09.979 "compare_and_write": false, 00:28:09.979 "abort": false, 00:28:09.979 "seek_hole": false, 00:28:09.979 "seek_data": false, 00:28:09.979 "copy": false, 00:28:09.979 "nvme_iov_md": false 00:28:09.979 }, 00:28:09.979 "memory_domains": [ 00:28:09.979 { 00:28:09.979 "dma_device_id": "system", 00:28:09.979 "dma_device_type": 1 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.979 "dma_device_type": 2 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "system", 00:28:09.979 "dma_device_type": 1 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.979 "dma_device_type": 2 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "system", 00:28:09.979 "dma_device_type": 1 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.979 "dma_device_type": 2 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "system", 00:28:09.979 "dma_device_type": 1 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.979 "dma_device_type": 2 00:28:09.979 } 00:28:09.979 ], 00:28:09.979 "driver_specific": { 00:28:09.979 "raid": { 00:28:09.979 "uuid": "44279ac6-4c76-4803-8af6-255a7de6622a", 00:28:09.979 "strip_size_kb": 0, 00:28:09.979 "state": "online", 00:28:09.979 "raid_level": "raid1", 00:28:09.979 "superblock": false, 00:28:09.979 "num_base_bdevs": 4, 00:28:09.979 "num_base_bdevs_discovered": 4, 00:28:09.979 "num_base_bdevs_operational": 4, 00:28:09.979 "base_bdevs_list": [ 00:28:09.979 { 00:28:09.979 "name": "NewBaseBdev", 00:28:09.979 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:09.979 "is_configured": true, 00:28:09.979 "data_offset": 0, 00:28:09.979 "data_size": 65536 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "name": "BaseBdev2", 00:28:09.979 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:09.979 "is_configured": true, 00:28:09.979 "data_offset": 0, 00:28:09.979 "data_size": 65536 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "name": "BaseBdev3", 00:28:09.979 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:09.979 "is_configured": true, 00:28:09.979 "data_offset": 0, 00:28:09.979 "data_size": 65536 00:28:09.979 }, 00:28:09.979 { 00:28:09.979 "name": "BaseBdev4", 00:28:09.979 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:09.979 "is_configured": true, 00:28:09.979 "data_offset": 0, 00:28:09.979 "data_size": 65536 00:28:09.979 } 00:28:09.979 ] 00:28:09.979 } 00:28:09.979 } 00:28:09.979 }' 00:28:09.979 08:54:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.979 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:09.979 BaseBdev2 00:28:09.979 BaseBdev3 00:28:09.979 BaseBdev4' 00:28:09.979 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.979 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:09.979 08:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.239 "name": "NewBaseBdev", 00:28:10.239 "aliases": [ 00:28:10.239 "63180d8e-3607-41f8-a19a-89984ccf986e" 00:28:10.239 ], 00:28:10.239 "product_name": "Malloc disk", 00:28:10.239 "block_size": 512, 00:28:10.239 "num_blocks": 65536, 00:28:10.239 "uuid": "63180d8e-3607-41f8-a19a-89984ccf986e", 00:28:10.239 "assigned_rate_limits": { 00:28:10.239 "rw_ios_per_sec": 0, 00:28:10.239 "rw_mbytes_per_sec": 0, 00:28:10.239 "r_mbytes_per_sec": 0, 00:28:10.239 "w_mbytes_per_sec": 0 00:28:10.239 }, 00:28:10.239 "claimed": true, 00:28:10.239 "claim_type": "exclusive_write", 00:28:10.239 "zoned": false, 00:28:10.239 "supported_io_types": { 00:28:10.239 "read": true, 00:28:10.239 "write": true, 00:28:10.239 "unmap": true, 00:28:10.239 "flush": true, 00:28:10.239 "reset": true, 00:28:10.239 "nvme_admin": false, 00:28:10.239 "nvme_io": false, 00:28:10.239 "nvme_io_md": false, 00:28:10.239 "write_zeroes": true, 00:28:10.239 "zcopy": true, 00:28:10.239 "get_zone_info": false, 00:28:10.239 "zone_management": false, 00:28:10.239 "zone_append": false, 00:28:10.239 "compare": false, 00:28:10.239 "compare_and_write": false, 00:28:10.239 "abort": true, 00:28:10.239 "seek_hole": false, 00:28:10.239 "seek_data": false, 00:28:10.239 "copy": true, 00:28:10.239 "nvme_iov_md": false 00:28:10.239 }, 00:28:10.239 "memory_domains": [ 00:28:10.239 { 00:28:10.239 "dma_device_id": "system", 00:28:10.239 "dma_device_type": 1 00:28:10.239 }, 00:28:10.239 { 00:28:10.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.239 "dma_device_type": 2 00:28:10.239 } 00:28:10.239 ], 00:28:10.239 "driver_specific": {} 00:28:10.239 }' 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.239 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:10.499 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.758 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.758 "name": "BaseBdev2", 00:28:10.758 "aliases": [ 00:28:10.758 "78e5c561-75d2-4f1b-ad92-b031fd7337be" 00:28:10.758 ], 00:28:10.758 "product_name": "Malloc disk", 00:28:10.758 "block_size": 512, 00:28:10.758 "num_blocks": 65536, 00:28:10.758 "uuid": "78e5c561-75d2-4f1b-ad92-b031fd7337be", 00:28:10.758 "assigned_rate_limits": { 00:28:10.758 "rw_ios_per_sec": 0, 00:28:10.758 "rw_mbytes_per_sec": 0, 00:28:10.758 "r_mbytes_per_sec": 0, 00:28:10.758 "w_mbytes_per_sec": 0 00:28:10.758 }, 00:28:10.758 "claimed": true, 00:28:10.758 "claim_type": "exclusive_write", 00:28:10.758 "zoned": false, 00:28:10.758 "supported_io_types": { 00:28:10.758 "read": true, 00:28:10.758 "write": true, 00:28:10.758 "unmap": true, 00:28:10.758 "flush": true, 00:28:10.758 "reset": true, 00:28:10.758 "nvme_admin": false, 00:28:10.758 "nvme_io": false, 00:28:10.758 "nvme_io_md": false, 00:28:10.758 "write_zeroes": true, 00:28:10.758 "zcopy": true, 00:28:10.758 "get_zone_info": false, 00:28:10.758 "zone_management": false, 00:28:10.758 "zone_append": false, 00:28:10.758 "compare": false, 00:28:10.758 "compare_and_write": false, 00:28:10.758 "abort": true, 00:28:10.758 "seek_hole": false, 00:28:10.758 "seek_data": false, 00:28:10.758 "copy": true, 00:28:10.758 "nvme_iov_md": false 00:28:10.758 }, 00:28:10.758 "memory_domains": [ 00:28:10.758 { 00:28:10.758 "dma_device_id": "system", 00:28:10.758 "dma_device_type": 1 00:28:10.758 }, 00:28:10.758 { 00:28:10.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.758 "dma_device_type": 2 00:28:10.758 } 00:28:10.758 ], 00:28:10.758 "driver_specific": {} 00:28:10.758 }' 00:28:11.017 08:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:11.017 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:11.276 08:54:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:11.276 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:11.535 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:11.535 "name": "BaseBdev3", 00:28:11.535 "aliases": [ 00:28:11.535 "ba3900f0-96eb-4489-9c67-d74aa24a4fbd" 00:28:11.535 ], 00:28:11.535 "product_name": "Malloc disk", 00:28:11.535 "block_size": 512, 00:28:11.535 "num_blocks": 65536, 00:28:11.535 "uuid": "ba3900f0-96eb-4489-9c67-d74aa24a4fbd", 00:28:11.535 "assigned_rate_limits": { 00:28:11.535 "rw_ios_per_sec": 0, 00:28:11.535 "rw_mbytes_per_sec": 0, 00:28:11.535 "r_mbytes_per_sec": 0, 00:28:11.535 "w_mbytes_per_sec": 0 00:28:11.535 }, 00:28:11.535 "claimed": true, 00:28:11.535 "claim_type": "exclusive_write", 00:28:11.535 "zoned": false, 00:28:11.535 "supported_io_types": { 00:28:11.535 "read": true, 00:28:11.535 "write": true, 00:28:11.535 "unmap": true, 00:28:11.535 "flush": true, 00:28:11.535 "reset": true, 00:28:11.535 "nvme_admin": false, 00:28:11.535 "nvme_io": false, 00:28:11.535 "nvme_io_md": false, 00:28:11.535 "write_zeroes": true, 00:28:11.535 "zcopy": true, 00:28:11.535 "get_zone_info": false, 00:28:11.535 "zone_management": false, 00:28:11.535 "zone_append": false, 00:28:11.535 "compare": false, 00:28:11.535 "compare_and_write": false, 00:28:11.535 "abort": true, 00:28:11.535 "seek_hole": false, 00:28:11.535 "seek_data": false, 00:28:11.535 "copy": true, 00:28:11.535 "nvme_iov_md": false 00:28:11.535 }, 00:28:11.535 "memory_domains": [ 00:28:11.535 { 00:28:11.535 "dma_device_id": "system", 00:28:11.535 "dma_device_type": 1 00:28:11.535 }, 00:28:11.535 { 00:28:11.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.535 "dma_device_type": 2 00:28:11.535 } 00:28:11.535 ], 00:28:11.535 "driver_specific": {} 00:28:11.535 }' 00:28:11.535 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.535 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:11.795 08:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.054 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.054 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:12.054 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.054 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:28:12.054 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:12.312 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.313 "name": "BaseBdev4", 00:28:12.313 "aliases": [ 00:28:12.313 "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4" 00:28:12.313 ], 00:28:12.313 "product_name": "Malloc disk", 00:28:12.313 "block_size": 512, 00:28:12.313 "num_blocks": 65536, 00:28:12.313 "uuid": "c1fa8c78-ee6c-40b3-883c-fa8ebfdbc5f4", 00:28:12.313 "assigned_rate_limits": { 00:28:12.313 "rw_ios_per_sec": 0, 00:28:12.313 "rw_mbytes_per_sec": 0, 00:28:12.313 "r_mbytes_per_sec": 0, 00:28:12.313 "w_mbytes_per_sec": 0 00:28:12.313 }, 00:28:12.313 "claimed": true, 00:28:12.313 "claim_type": "exclusive_write", 00:28:12.313 "zoned": false, 00:28:12.313 "supported_io_types": { 00:28:12.313 "read": true, 00:28:12.313 "write": true, 00:28:12.313 "unmap": true, 00:28:12.313 "flush": true, 00:28:12.313 "reset": true, 00:28:12.313 "nvme_admin": false, 00:28:12.313 "nvme_io": false, 00:28:12.313 "nvme_io_md": false, 00:28:12.313 "write_zeroes": true, 00:28:12.313 "zcopy": true, 00:28:12.313 "get_zone_info": false, 00:28:12.313 "zone_management": false, 00:28:12.313 "zone_append": false, 00:28:12.313 "compare": false, 00:28:12.313 "compare_and_write": false, 00:28:12.313 "abort": true, 00:28:12.313 "seek_hole": false, 00:28:12.313 "seek_data": false, 00:28:12.313 "copy": true, 00:28:12.313 "nvme_iov_md": false 00:28:12.313 }, 00:28:12.313 "memory_domains": [ 00:28:12.313 { 00:28:12.313 "dma_device_id": "system", 00:28:12.313 "dma_device_type": 1 00:28:12.313 }, 00:28:12.313 { 00:28:12.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.313 "dma_device_type": 2 00:28:12.313 } 00:28:12.313 ], 00:28:12.313 "driver_specific": {} 00:28:12.313 }' 00:28:12.313 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.313 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.313 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:12.313 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.572 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.831 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:12.831 08:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:13.090 [2024-07-12 08:54:48.038737] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:13.090 [2024-07-12 08:54:48.039145] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:28:13.090 [2024-07-12 08:54:48.039406] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.090 [2024-07-12 08:54:48.039889] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.090 [2024-07-12 08:54:48.040024] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 142461 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 142461 ']' 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 142461 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142461 00:28:13.090 killing process with pid 142461 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142461' 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 142461 00:28:13.090 08:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 142461 00:28:13.090 [2024-07-12 08:54:48.074589] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:13.349 [2024-07-12 08:54:48.422150] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:14.727 ************************************ 00:28:14.727 END TEST raid_state_function_test 00:28:14.727 ************************************ 00:28:14.727 08:54:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:14.727 00:28:14.727 real 0m37.427s 00:28:14.727 user 1m10.100s 00:28:14.727 sys 0m4.116s 00:28:14.727 08:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.728 08:54:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:14.728 08:54:49 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:14.728 08:54:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:28:14.728 08:54:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.728 08:54:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:14.728 ************************************ 00:28:14.728 START TEST raid_state_function_test_sb 00:28:14.728 ************************************ 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:14.728 08:54:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=143661 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 143661' 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:14.728 Process raid pid: 143661 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 143661 /var/tmp/spdk-raid.sock 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 143661 ']' 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:14.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.728 08:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.728 [2024-07-12 08:54:49.749100] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:28:14.728 [2024-07-12 08:54:49.749617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.728 [2024-07-12 08:54:49.915938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.987 [2024-07-12 08:54:50.149650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.246 [2024-07-12 08:54:50.336636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:15.814 [2024-07-12 08:54:50.936003] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:15.814 [2024-07-12 08:54:50.938630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:15.814 [2024-07-12 08:54:50.938811] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:15.814 [2024-07-12 08:54:50.938881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:15.814 [2024-07-12 08:54:50.938991] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:15.814 [2024-07-12 08:54:50.939047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:15.814 [2024-07-12 08:54:50.939156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:15.814 [2024-07-12 08:54:50.939216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.814 08:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.072 08:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.073 "name": "Existed_Raid", 00:28:16.073 "uuid": "d6ffaf25-a408-46b3-a22e-d2bd729ad642", 00:28:16.073 "strip_size_kb": 0, 00:28:16.073 "state": "configuring", 00:28:16.073 "raid_level": "raid1", 00:28:16.073 "superblock": true, 00:28:16.073 "num_base_bdevs": 4, 00:28:16.073 "num_base_bdevs_discovered": 0, 00:28:16.073 "num_base_bdevs_operational": 4, 00:28:16.073 "base_bdevs_list": [ 00:28:16.073 { 00:28:16.073 "name": "BaseBdev1", 00:28:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.073 "is_configured": false, 00:28:16.073 "data_offset": 0, 00:28:16.073 "data_size": 0 00:28:16.073 }, 00:28:16.073 { 00:28:16.073 "name": "BaseBdev2", 00:28:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.073 "is_configured": false, 00:28:16.073 "data_offset": 0, 00:28:16.073 "data_size": 0 00:28:16.073 }, 00:28:16.073 { 00:28:16.073 "name": "BaseBdev3", 00:28:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.073 "is_configured": false, 00:28:16.073 "data_offset": 0, 00:28:16.073 "data_size": 0 00:28:16.073 }, 00:28:16.073 { 00:28:16.073 "name": "BaseBdev4", 00:28:16.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.073 "is_configured": false, 00:28:16.073 "data_offset": 0, 00:28:16.073 "data_size": 0 00:28:16.073 } 00:28:16.073 ] 00:28:16.073 }' 00:28:16.073 08:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.073 08:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.008 08:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:17.267 [2024-07-12 08:54:52.231936] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:17.267 [2024-07-12 08:54:52.232457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:28:17.267 08:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:17.267 [2024-07-12 08:54:52.452145] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:17.267 [2024-07-12 
08:54:52.452556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:17.267 [2024-07-12 08:54:52.452712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:17.267 [2024-07-12 08:54:52.452843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:17.267 [2024-07-12 08:54:52.453013] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:17.267 [2024-07-12 08:54:52.453108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:17.267 [2024-07-12 08:54:52.453353] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:17.267 [2024-07-12 08:54:52.453429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:17.525 08:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:17.784 [2024-07-12 08:54:52.759723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:17.784 BaseBdev1 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:17.784 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:18.042 08:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:18.042 [ 00:28:18.042 { 00:28:18.042 "name": "BaseBdev1", 00:28:18.042 "aliases": [ 00:28:18.042 "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f" 00:28:18.042 ], 00:28:18.042 "product_name": "Malloc disk", 00:28:18.042 "block_size": 512, 00:28:18.042 "num_blocks": 65536, 00:28:18.042 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:18.042 "assigned_rate_limits": { 00:28:18.042 "rw_ios_per_sec": 0, 00:28:18.042 "rw_mbytes_per_sec": 0, 00:28:18.042 "r_mbytes_per_sec": 0, 00:28:18.042 "w_mbytes_per_sec": 0 00:28:18.042 }, 00:28:18.042 "claimed": true, 00:28:18.042 "claim_type": "exclusive_write", 00:28:18.042 "zoned": false, 00:28:18.042 "supported_io_types": { 00:28:18.042 "read": true, 00:28:18.042 "write": true, 00:28:18.042 "unmap": true, 00:28:18.042 "flush": true, 00:28:18.042 "reset": true, 00:28:18.042 "nvme_admin": false, 00:28:18.042 "nvme_io": false, 00:28:18.042 "nvme_io_md": false, 00:28:18.042 "write_zeroes": true, 00:28:18.042 "zcopy": true, 00:28:18.042 "get_zone_info": false, 00:28:18.042 "zone_management": false, 00:28:18.042 "zone_append": false, 00:28:18.042 "compare": false, 00:28:18.042 "compare_and_write": false, 00:28:18.042 "abort": true, 00:28:18.042 "seek_hole": false, 00:28:18.042 
"seek_data": false, 00:28:18.042 "copy": true, 00:28:18.042 "nvme_iov_md": false 00:28:18.042 }, 00:28:18.042 "memory_domains": [ 00:28:18.042 { 00:28:18.042 "dma_device_id": "system", 00:28:18.042 "dma_device_type": 1 00:28:18.042 }, 00:28:18.042 { 00:28:18.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.042 "dma_device_type": 2 00:28:18.042 } 00:28:18.042 ], 00:28:18.042 "driver_specific": {} 00:28:18.042 } 00:28:18.042 ] 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.042 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.300 08:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.300 "name": "Existed_Raid", 00:28:18.300 "uuid": "79688596-2b3f-40fb-89c9-9a2104a674e9", 00:28:18.300 "strip_size_kb": 0, 00:28:18.301 "state": "configuring", 00:28:18.301 "raid_level": "raid1", 00:28:18.301 "superblock": true, 00:28:18.301 "num_base_bdevs": 4, 00:28:18.301 "num_base_bdevs_discovered": 1, 00:28:18.301 "num_base_bdevs_operational": 4, 00:28:18.301 "base_bdevs_list": [ 00:28:18.301 { 00:28:18.301 "name": "BaseBdev1", 00:28:18.301 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:18.301 "is_configured": true, 00:28:18.301 "data_offset": 2048, 00:28:18.301 "data_size": 63488 00:28:18.301 }, 00:28:18.301 { 00:28:18.301 "name": "BaseBdev2", 00:28:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.301 "is_configured": false, 00:28:18.301 "data_offset": 0, 00:28:18.301 "data_size": 0 00:28:18.301 }, 00:28:18.301 { 00:28:18.301 "name": "BaseBdev3", 00:28:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.301 "is_configured": false, 00:28:18.301 "data_offset": 0, 00:28:18.301 "data_size": 0 00:28:18.301 }, 00:28:18.301 { 00:28:18.301 "name": "BaseBdev4", 00:28:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.301 "is_configured": false, 00:28:18.301 "data_offset": 0, 00:28:18.301 "data_size": 0 00:28:18.301 } 00:28:18.301 ] 00:28:18.301 }' 00:28:18.301 08:54:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.301 08:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.312 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:19.312 [2024-07-12 08:54:54.384191] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:19.312 [2024-07-12 08:54:54.384660] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:28:19.312 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:19.570 [2024-07-12 08:54:54.600330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.570 [2024-07-12 08:54:54.603069] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:19.570 [2024-07-12 08:54:54.603304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:19.570 [2024-07-12 08:54:54.603413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:19.570 [2024-07-12 08:54:54.603478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:19.570 [2024-07-12 08:54:54.603569] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:19.570 [2024-07-12 08:54:54.603696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.570 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.829 08:54:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.829 "name": "Existed_Raid", 00:28:19.829 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:19.829 "strip_size_kb": 0, 00:28:19.829 "state": "configuring", 00:28:19.829 "raid_level": "raid1", 00:28:19.829 "superblock": true, 00:28:19.829 "num_base_bdevs": 4, 00:28:19.829 "num_base_bdevs_discovered": 1, 00:28:19.829 "num_base_bdevs_operational": 4, 00:28:19.829 "base_bdevs_list": [ 00:28:19.829 { 00:28:19.829 "name": "BaseBdev1", 00:28:19.829 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:19.829 "is_configured": true, 00:28:19.829 "data_offset": 2048, 00:28:19.829 "data_size": 63488 00:28:19.829 }, 00:28:19.829 { 00:28:19.829 "name": "BaseBdev2", 00:28:19.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.829 "is_configured": false, 00:28:19.829 "data_offset": 0, 00:28:19.829 "data_size": 0 00:28:19.829 }, 00:28:19.829 { 00:28:19.829 "name": "BaseBdev3", 00:28:19.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.829 "is_configured": false, 00:28:19.829 "data_offset": 0, 00:28:19.829 "data_size": 0 00:28:19.829 }, 00:28:19.829 { 00:28:19.829 "name": "BaseBdev4", 00:28:19.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.829 "is_configured": false, 00:28:19.829 "data_offset": 0, 00:28:19.829 "data_size": 0 00:28:19.829 } 00:28:19.829 ] 00:28:19.829 }' 00:28:19.829 08:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.829 08:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.396 08:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:20.659 [2024-07-12 08:54:55.835916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:20.659 BaseBdev2 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:20.659 08:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:20.918 08:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:21.176 [ 00:28:21.176 { 00:28:21.176 "name": "BaseBdev2", 00:28:21.176 "aliases": [ 00:28:21.176 "57925c43-1077-4fb0-9f2f-f1ed10514dbd" 00:28:21.176 ], 00:28:21.176 "product_name": "Malloc disk", 00:28:21.176 "block_size": 512, 00:28:21.176 "num_blocks": 65536, 00:28:21.176 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:21.176 "assigned_rate_limits": { 00:28:21.176 "rw_ios_per_sec": 0, 00:28:21.176 "rw_mbytes_per_sec": 0, 00:28:21.176 "r_mbytes_per_sec": 0, 00:28:21.176 "w_mbytes_per_sec": 0 
00:28:21.176 }, 00:28:21.176 "claimed": true, 00:28:21.176 "claim_type": "exclusive_write", 00:28:21.176 "zoned": false, 00:28:21.176 "supported_io_types": { 00:28:21.176 "read": true, 00:28:21.176 "write": true, 00:28:21.176 "unmap": true, 00:28:21.176 "flush": true, 00:28:21.176 "reset": true, 00:28:21.176 "nvme_admin": false, 00:28:21.176 "nvme_io": false, 00:28:21.176 "nvme_io_md": false, 00:28:21.176 "write_zeroes": true, 00:28:21.176 "zcopy": true, 00:28:21.176 "get_zone_info": false, 00:28:21.176 "zone_management": false, 00:28:21.176 "zone_append": false, 00:28:21.176 "compare": false, 00:28:21.176 "compare_and_write": false, 00:28:21.176 "abort": true, 00:28:21.176 "seek_hole": false, 00:28:21.176 "seek_data": false, 00:28:21.176 "copy": true, 00:28:21.176 "nvme_iov_md": false 00:28:21.176 }, 00:28:21.176 "memory_domains": [ 00:28:21.176 { 00:28:21.176 "dma_device_id": "system", 00:28:21.176 "dma_device_type": 1 00:28:21.176 }, 00:28:21.176 { 00:28:21.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:21.176 "dma_device_type": 2 00:28:21.176 } 00:28:21.176 ], 00:28:21.176 "driver_specific": {} 00:28:21.176 } 00:28:21.176 ] 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.176 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.177 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.435 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.435 "name": "Existed_Raid", 00:28:21.435 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:21.435 "strip_size_kb": 0, 00:28:21.435 "state": "configuring", 00:28:21.435 "raid_level": "raid1", 00:28:21.435 "superblock": true, 00:28:21.435 "num_base_bdevs": 4, 00:28:21.435 "num_base_bdevs_discovered": 2, 00:28:21.435 "num_base_bdevs_operational": 4, 00:28:21.435 "base_bdevs_list": [ 00:28:21.435 { 00:28:21.435 "name": "BaseBdev1", 00:28:21.435 "uuid": 
"b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:21.435 "is_configured": true, 00:28:21.435 "data_offset": 2048, 00:28:21.435 "data_size": 63488 00:28:21.435 }, 00:28:21.435 { 00:28:21.435 "name": "BaseBdev2", 00:28:21.435 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:21.435 "is_configured": true, 00:28:21.435 "data_offset": 2048, 00:28:21.435 "data_size": 63488 00:28:21.435 }, 00:28:21.435 { 00:28:21.435 "name": "BaseBdev3", 00:28:21.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.435 "is_configured": false, 00:28:21.435 "data_offset": 0, 00:28:21.435 "data_size": 0 00:28:21.435 }, 00:28:21.435 { 00:28:21.435 "name": "BaseBdev4", 00:28:21.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.435 "is_configured": false, 00:28:21.435 "data_offset": 0, 00:28:21.435 "data_size": 0 00:28:21.435 } 00:28:21.435 ] 00:28:21.435 }' 00:28:21.435 08:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.435 08:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:22.371 [2024-07-12 08:54:57.524209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:22.371 BaseBdev3 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:22.371 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:22.629 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:22.888 [ 00:28:22.888 { 00:28:22.888 "name": "BaseBdev3", 00:28:22.888 "aliases": [ 00:28:22.888 "49034d23-d844-4f26-b1f6-89ce28ba16b9" 00:28:22.888 ], 00:28:22.888 "product_name": "Malloc disk", 00:28:22.888 "block_size": 512, 00:28:22.888 "num_blocks": 65536, 00:28:22.888 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:22.888 "assigned_rate_limits": { 00:28:22.888 "rw_ios_per_sec": 0, 00:28:22.888 "rw_mbytes_per_sec": 0, 00:28:22.888 "r_mbytes_per_sec": 0, 00:28:22.888 "w_mbytes_per_sec": 0 00:28:22.888 }, 00:28:22.888 "claimed": true, 00:28:22.888 "claim_type": "exclusive_write", 00:28:22.888 "zoned": false, 00:28:22.888 "supported_io_types": { 00:28:22.888 "read": true, 00:28:22.888 "write": true, 00:28:22.888 "unmap": true, 00:28:22.888 "flush": true, 00:28:22.888 "reset": true, 00:28:22.888 "nvme_admin": false, 00:28:22.888 "nvme_io": false, 00:28:22.888 "nvme_io_md": false, 00:28:22.888 "write_zeroes": true, 00:28:22.888 "zcopy": true, 00:28:22.888 "get_zone_info": false, 00:28:22.888 "zone_management": false, 
00:28:22.888 "zone_append": false, 00:28:22.888 "compare": false, 00:28:22.888 "compare_and_write": false, 00:28:22.888 "abort": true, 00:28:22.888 "seek_hole": false, 00:28:22.888 "seek_data": false, 00:28:22.888 "copy": true, 00:28:22.888 "nvme_iov_md": false 00:28:22.888 }, 00:28:22.888 "memory_domains": [ 00:28:22.888 { 00:28:22.888 "dma_device_id": "system", 00:28:22.888 "dma_device_type": 1 00:28:22.888 }, 00:28:22.888 { 00:28:22.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.888 "dma_device_type": 2 00:28:22.888 } 00:28:22.888 ], 00:28:22.888 "driver_specific": {} 00:28:22.888 } 00:28:22.888 ] 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:22.888 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.889 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.889 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.889 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.889 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.889 08:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:23.148 08:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.148 "name": "Existed_Raid", 00:28:23.148 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:23.148 "strip_size_kb": 0, 00:28:23.148 "state": "configuring", 00:28:23.148 "raid_level": "raid1", 00:28:23.148 "superblock": true, 00:28:23.148 "num_base_bdevs": 4, 00:28:23.148 "num_base_bdevs_discovered": 3, 00:28:23.148 "num_base_bdevs_operational": 4, 00:28:23.148 "base_bdevs_list": [ 00:28:23.148 { 00:28:23.148 "name": "BaseBdev1", 00:28:23.148 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:23.148 "is_configured": true, 00:28:23.148 "data_offset": 2048, 00:28:23.148 "data_size": 63488 00:28:23.148 }, 00:28:23.148 { 00:28:23.148 "name": "BaseBdev2", 00:28:23.148 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:23.148 "is_configured": true, 00:28:23.148 "data_offset": 2048, 00:28:23.148 "data_size": 63488 00:28:23.148 }, 00:28:23.148 { 00:28:23.148 "name": "BaseBdev3", 00:28:23.148 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:23.148 "is_configured": true, 
00:28:23.148 "data_offset": 2048, 00:28:23.148 "data_size": 63488 00:28:23.148 }, 00:28:23.148 { 00:28:23.148 "name": "BaseBdev4", 00:28:23.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.148 "is_configured": false, 00:28:23.148 "data_offset": 0, 00:28:23.148 "data_size": 0 00:28:23.148 } 00:28:23.148 ] 00:28:23.148 }' 00:28:23.148 08:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.148 08:54:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:24.084 08:54:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:24.084 [2024-07-12 08:54:59.261727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:24.084 [2024-07-12 08:54:59.262446] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:28:24.084 [2024-07-12 08:54:59.262607] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:24.084 [2024-07-12 08:54:59.262805] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:28:24.084 BaseBdev4 00:28:24.085 [2024-07-12 08:54:59.263364] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:28:24.085 [2024-07-12 08:54:59.263512] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:28:24.085 [2024-07-12 08:54:59.263769] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:24.085 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:24.344 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:24.603 [ 00:28:24.603 { 00:28:24.603 "name": "BaseBdev4", 00:28:24.603 "aliases": [ 00:28:24.603 "5f380976-7fd4-4755-8aee-485108fca646" 00:28:24.603 ], 00:28:24.603 "product_name": "Malloc disk", 00:28:24.603 "block_size": 512, 00:28:24.603 "num_blocks": 65536, 00:28:24.603 "uuid": "5f380976-7fd4-4755-8aee-485108fca646", 00:28:24.603 "assigned_rate_limits": { 00:28:24.603 "rw_ios_per_sec": 0, 00:28:24.603 "rw_mbytes_per_sec": 0, 00:28:24.603 "r_mbytes_per_sec": 0, 00:28:24.603 "w_mbytes_per_sec": 0 00:28:24.603 }, 00:28:24.603 "claimed": true, 00:28:24.603 "claim_type": "exclusive_write", 00:28:24.603 "zoned": false, 00:28:24.603 "supported_io_types": { 00:28:24.603 "read": true, 00:28:24.603 "write": true, 00:28:24.603 "unmap": true, 00:28:24.603 "flush": true, 00:28:24.603 "reset": 
true, 00:28:24.603 "nvme_admin": false, 00:28:24.603 "nvme_io": false, 00:28:24.603 "nvme_io_md": false, 00:28:24.603 "write_zeroes": true, 00:28:24.603 "zcopy": true, 00:28:24.603 "get_zone_info": false, 00:28:24.603 "zone_management": false, 00:28:24.603 "zone_append": false, 00:28:24.603 "compare": false, 00:28:24.603 "compare_and_write": false, 00:28:24.603 "abort": true, 00:28:24.603 "seek_hole": false, 00:28:24.603 "seek_data": false, 00:28:24.603 "copy": true, 00:28:24.603 "nvme_iov_md": false 00:28:24.603 }, 00:28:24.603 "memory_domains": [ 00:28:24.603 { 00:28:24.603 "dma_device_id": "system", 00:28:24.603 "dma_device_type": 1 00:28:24.603 }, 00:28:24.603 { 00:28:24.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.603 "dma_device_type": 2 00:28:24.603 } 00:28:24.603 ], 00:28:24.603 "driver_specific": {} 00:28:24.603 } 00:28:24.603 ] 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.603 08:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:24.862 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.862 "name": "Existed_Raid", 00:28:24.862 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:24.862 "strip_size_kb": 0, 00:28:24.862 "state": "online", 00:28:24.862 "raid_level": "raid1", 00:28:24.862 "superblock": true, 00:28:24.862 "num_base_bdevs": 4, 00:28:24.862 "num_base_bdevs_discovered": 4, 00:28:24.862 "num_base_bdevs_operational": 4, 00:28:24.862 "base_bdevs_list": [ 00:28:24.862 { 00:28:24.862 "name": "BaseBdev1", 00:28:24.862 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:24.862 "is_configured": true, 00:28:24.862 "data_offset": 2048, 00:28:24.862 "data_size": 63488 00:28:24.862 }, 00:28:24.862 { 00:28:24.862 "name": "BaseBdev2", 00:28:24.862 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:24.862 "is_configured": true, 
00:28:24.862 "data_offset": 2048, 00:28:24.862 "data_size": 63488 00:28:24.862 }, 00:28:24.862 { 00:28:24.862 "name": "BaseBdev3", 00:28:24.862 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:24.862 "is_configured": true, 00:28:24.862 "data_offset": 2048, 00:28:24.862 "data_size": 63488 00:28:24.863 }, 00:28:24.863 { 00:28:24.863 "name": "BaseBdev4", 00:28:24.863 "uuid": "5f380976-7fd4-4755-8aee-485108fca646", 00:28:24.863 "is_configured": true, 00:28:24.863 "data_offset": 2048, 00:28:24.863 "data_size": 63488 00:28:24.863 } 00:28:24.863 ] 00:28:24.863 }' 00:28:24.863 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.863 08:55:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.799 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:25.799 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:25.800 [2024-07-12 08:55:00.946734] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:25.800 "name": "Existed_Raid", 00:28:25.800 "aliases": [ 00:28:25.800 "0a8fc784-efb9-4234-b9fb-652c089a7ba5" 00:28:25.800 ], 00:28:25.800 "product_name": "Raid Volume", 00:28:25.800 "block_size": 512, 00:28:25.800 "num_blocks": 63488, 00:28:25.800 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:25.800 "assigned_rate_limits": { 00:28:25.800 "rw_ios_per_sec": 0, 00:28:25.800 "rw_mbytes_per_sec": 0, 00:28:25.800 "r_mbytes_per_sec": 0, 00:28:25.800 "w_mbytes_per_sec": 0 00:28:25.800 }, 00:28:25.800 "claimed": false, 00:28:25.800 "zoned": false, 00:28:25.800 "supported_io_types": { 00:28:25.800 "read": true, 00:28:25.800 "write": true, 00:28:25.800 "unmap": false, 00:28:25.800 "flush": false, 00:28:25.800 "reset": true, 00:28:25.800 "nvme_admin": false, 00:28:25.800 "nvme_io": false, 00:28:25.800 "nvme_io_md": false, 00:28:25.800 "write_zeroes": true, 00:28:25.800 "zcopy": false, 00:28:25.800 "get_zone_info": false, 00:28:25.800 "zone_management": false, 00:28:25.800 "zone_append": false, 00:28:25.800 "compare": false, 00:28:25.800 "compare_and_write": false, 00:28:25.800 "abort": false, 00:28:25.800 "seek_hole": false, 00:28:25.800 "seek_data": false, 00:28:25.800 "copy": false, 00:28:25.800 "nvme_iov_md": false 00:28:25.800 }, 00:28:25.800 "memory_domains": [ 00:28:25.800 { 00:28:25.800 "dma_device_id": "system", 00:28:25.800 "dma_device_type": 1 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.800 "dma_device_type": 2 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": 
"system", 00:28:25.800 "dma_device_type": 1 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.800 "dma_device_type": 2 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "system", 00:28:25.800 "dma_device_type": 1 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.800 "dma_device_type": 2 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "system", 00:28:25.800 "dma_device_type": 1 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.800 "dma_device_type": 2 00:28:25.800 } 00:28:25.800 ], 00:28:25.800 "driver_specific": { 00:28:25.800 "raid": { 00:28:25.800 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:25.800 "strip_size_kb": 0, 00:28:25.800 "state": "online", 00:28:25.800 "raid_level": "raid1", 00:28:25.800 "superblock": true, 00:28:25.800 "num_base_bdevs": 4, 00:28:25.800 "num_base_bdevs_discovered": 4, 00:28:25.800 "num_base_bdevs_operational": 4, 00:28:25.800 "base_bdevs_list": [ 00:28:25.800 { 00:28:25.800 "name": "BaseBdev1", 00:28:25.800 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:25.800 "is_configured": true, 00:28:25.800 "data_offset": 2048, 00:28:25.800 "data_size": 63488 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "name": "BaseBdev2", 00:28:25.800 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:25.800 "is_configured": true, 00:28:25.800 "data_offset": 2048, 00:28:25.800 "data_size": 63488 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "name": "BaseBdev3", 00:28:25.800 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:25.800 "is_configured": true, 00:28:25.800 "data_offset": 2048, 00:28:25.800 "data_size": 63488 00:28:25.800 }, 00:28:25.800 { 00:28:25.800 "name": "BaseBdev4", 00:28:25.800 "uuid": "5f380976-7fd4-4755-8aee-485108fca646", 00:28:25.800 "is_configured": true, 00:28:25.800 "data_offset": 2048, 00:28:25.800 "data_size": 63488 00:28:25.800 } 00:28:25.800 ] 00:28:25.800 } 00:28:25.800 } 00:28:25.800 }' 00:28:25.800 08:55:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:26.059 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:26.059 BaseBdev2 00:28:26.059 BaseBdev3 00:28:26.059 BaseBdev4' 00:28:26.059 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:26.059 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:26.059 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:26.059 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:26.059 "name": "BaseBdev1", 00:28:26.059 "aliases": [ 00:28:26.059 "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f" 00:28:26.059 ], 00:28:26.059 "product_name": "Malloc disk", 00:28:26.059 "block_size": 512, 00:28:26.059 "num_blocks": 65536, 00:28:26.059 "uuid": "b4e4acfb-6e6c-4dbd-8f70-98cfcd8f1c8f", 00:28:26.059 "assigned_rate_limits": { 00:28:26.059 "rw_ios_per_sec": 0, 00:28:26.059 "rw_mbytes_per_sec": 0, 00:28:26.059 "r_mbytes_per_sec": 0, 00:28:26.059 "w_mbytes_per_sec": 0 00:28:26.060 }, 00:28:26.060 "claimed": true, 00:28:26.060 "claim_type": "exclusive_write", 00:28:26.060 "zoned": false, 00:28:26.060 "supported_io_types": { 
00:28:26.060 "read": true, 00:28:26.060 "write": true, 00:28:26.060 "unmap": true, 00:28:26.060 "flush": true, 00:28:26.060 "reset": true, 00:28:26.060 "nvme_admin": false, 00:28:26.060 "nvme_io": false, 00:28:26.060 "nvme_io_md": false, 00:28:26.060 "write_zeroes": true, 00:28:26.060 "zcopy": true, 00:28:26.060 "get_zone_info": false, 00:28:26.060 "zone_management": false, 00:28:26.060 "zone_append": false, 00:28:26.060 "compare": false, 00:28:26.060 "compare_and_write": false, 00:28:26.060 "abort": true, 00:28:26.060 "seek_hole": false, 00:28:26.060 "seek_data": false, 00:28:26.060 "copy": true, 00:28:26.060 "nvme_iov_md": false 00:28:26.060 }, 00:28:26.060 "memory_domains": [ 00:28:26.060 { 00:28:26.060 "dma_device_id": "system", 00:28:26.060 "dma_device_type": 1 00:28:26.060 }, 00:28:26.060 { 00:28:26.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.060 "dma_device_type": 2 00:28:26.060 } 00:28:26.060 ], 00:28:26.060 "driver_specific": {} 00:28:26.060 }' 00:28:26.060 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:26.319 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:26.578 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:26.837 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:26.837 "name": "BaseBdev2", 00:28:26.837 "aliases": [ 00:28:26.837 "57925c43-1077-4fb0-9f2f-f1ed10514dbd" 00:28:26.837 ], 00:28:26.837 "product_name": "Malloc disk", 00:28:26.837 "block_size": 512, 00:28:26.837 "num_blocks": 65536, 00:28:26.837 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:26.837 "assigned_rate_limits": { 00:28:26.837 "rw_ios_per_sec": 0, 00:28:26.837 "rw_mbytes_per_sec": 0, 00:28:26.837 "r_mbytes_per_sec": 0, 00:28:26.837 "w_mbytes_per_sec": 0 00:28:26.837 }, 00:28:26.837 "claimed": true, 00:28:26.837 "claim_type": "exclusive_write", 00:28:26.837 "zoned": false, 00:28:26.837 "supported_io_types": { 00:28:26.837 "read": true, 00:28:26.837 "write": true, 00:28:26.837 "unmap": true, 00:28:26.837 "flush": true, 00:28:26.837 "reset": true, 00:28:26.837 
"nvme_admin": false, 00:28:26.837 "nvme_io": false, 00:28:26.837 "nvme_io_md": false, 00:28:26.837 "write_zeroes": true, 00:28:26.837 "zcopy": true, 00:28:26.837 "get_zone_info": false, 00:28:26.837 "zone_management": false, 00:28:26.837 "zone_append": false, 00:28:26.837 "compare": false, 00:28:26.837 "compare_and_write": false, 00:28:26.837 "abort": true, 00:28:26.837 "seek_hole": false, 00:28:26.837 "seek_data": false, 00:28:26.837 "copy": true, 00:28:26.837 "nvme_iov_md": false 00:28:26.837 }, 00:28:26.837 "memory_domains": [ 00:28:26.837 { 00:28:26.837 "dma_device_id": "system", 00:28:26.837 "dma_device_type": 1 00:28:26.837 }, 00:28:26.837 { 00:28:26.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.837 "dma_device_type": 2 00:28:26.837 } 00:28:26.837 ], 00:28:26.837 "driver_specific": {} 00:28:26.837 }' 00:28:26.837 08:55:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:26.837 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.096 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:27.354 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:27.612 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:27.612 "name": "BaseBdev3", 00:28:27.612 "aliases": [ 00:28:27.612 "49034d23-d844-4f26-b1f6-89ce28ba16b9" 00:28:27.612 ], 00:28:27.612 "product_name": "Malloc disk", 00:28:27.612 "block_size": 512, 00:28:27.612 "num_blocks": 65536, 00:28:27.612 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:27.612 "assigned_rate_limits": { 00:28:27.612 "rw_ios_per_sec": 0, 00:28:27.612 "rw_mbytes_per_sec": 0, 00:28:27.612 "r_mbytes_per_sec": 0, 00:28:27.612 "w_mbytes_per_sec": 0 00:28:27.612 }, 00:28:27.612 "claimed": true, 00:28:27.612 "claim_type": "exclusive_write", 00:28:27.612 "zoned": false, 00:28:27.612 "supported_io_types": { 00:28:27.612 "read": true, 00:28:27.612 "write": true, 00:28:27.612 "unmap": true, 00:28:27.612 "flush": true, 00:28:27.612 "reset": true, 00:28:27.612 "nvme_admin": false, 00:28:27.612 "nvme_io": false, 00:28:27.612 "nvme_io_md": false, 00:28:27.612 "write_zeroes": true, 00:28:27.612 "zcopy": true, 
00:28:27.612 "get_zone_info": false, 00:28:27.612 "zone_management": false, 00:28:27.612 "zone_append": false, 00:28:27.612 "compare": false, 00:28:27.612 "compare_and_write": false, 00:28:27.612 "abort": true, 00:28:27.612 "seek_hole": false, 00:28:27.612 "seek_data": false, 00:28:27.612 "copy": true, 00:28:27.612 "nvme_iov_md": false 00:28:27.612 }, 00:28:27.612 "memory_domains": [ 00:28:27.612 { 00:28:27.612 "dma_device_id": "system", 00:28:27.612 "dma_device_type": 1 00:28:27.612 }, 00:28:27.612 { 00:28:27.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.612 "dma_device_type": 2 00:28:27.612 } 00:28:27.612 ], 00:28:27.612 "driver_specific": {} 00:28:27.612 }' 00:28:27.612 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.612 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.871 08:55:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:27.871 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:27.871 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.130 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.130 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:28.130 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:28.130 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:28.130 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:28.389 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:28.389 "name": "BaseBdev4", 00:28:28.389 "aliases": [ 00:28:28.389 "5f380976-7fd4-4755-8aee-485108fca646" 00:28:28.389 ], 00:28:28.389 "product_name": "Malloc disk", 00:28:28.389 "block_size": 512, 00:28:28.389 "num_blocks": 65536, 00:28:28.389 "uuid": "5f380976-7fd4-4755-8aee-485108fca646", 00:28:28.389 "assigned_rate_limits": { 00:28:28.389 "rw_ios_per_sec": 0, 00:28:28.389 "rw_mbytes_per_sec": 0, 00:28:28.389 "r_mbytes_per_sec": 0, 00:28:28.389 "w_mbytes_per_sec": 0 00:28:28.389 }, 00:28:28.389 "claimed": true, 00:28:28.389 "claim_type": "exclusive_write", 00:28:28.389 "zoned": false, 00:28:28.389 "supported_io_types": { 00:28:28.389 "read": true, 00:28:28.389 "write": true, 00:28:28.389 "unmap": true, 00:28:28.389 "flush": true, 00:28:28.389 "reset": true, 00:28:28.389 "nvme_admin": false, 00:28:28.389 "nvme_io": false, 00:28:28.389 "nvme_io_md": false, 00:28:28.389 "write_zeroes": true, 00:28:28.389 "zcopy": true, 00:28:28.389 "get_zone_info": false, 00:28:28.389 "zone_management": false, 00:28:28.389 "zone_append": false, 00:28:28.389 "compare": false, 
00:28:28.389 "compare_and_write": false, 00:28:28.389 "abort": true, 00:28:28.389 "seek_hole": false, 00:28:28.389 "seek_data": false, 00:28:28.389 "copy": true, 00:28:28.389 "nvme_iov_md": false 00:28:28.389 }, 00:28:28.389 "memory_domains": [ 00:28:28.389 { 00:28:28.389 "dma_device_id": "system", 00:28:28.389 "dma_device_type": 1 00:28:28.389 }, 00:28:28.389 { 00:28:28.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:28.389 "dma_device_type": 2 00:28:28.389 } 00:28:28.389 ], 00:28:28.389 "driver_specific": {} 00:28:28.389 }' 00:28:28.389 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:28.389 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:28.389 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:28.389 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.649 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.908 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:28.908 08:55:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:29.167 [2024-07-12 08:55:04.147113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.167 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:29.426 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.426 "name": "Existed_Raid", 00:28:29.426 "uuid": "0a8fc784-efb9-4234-b9fb-652c089a7ba5", 00:28:29.426 "strip_size_kb": 0, 00:28:29.426 "state": "online", 00:28:29.426 "raid_level": "raid1", 00:28:29.426 "superblock": true, 00:28:29.426 "num_base_bdevs": 4, 00:28:29.426 "num_base_bdevs_discovered": 3, 00:28:29.426 "num_base_bdevs_operational": 3, 00:28:29.426 "base_bdevs_list": [ 00:28:29.426 { 00:28:29.426 "name": null, 00:28:29.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.426 "is_configured": false, 00:28:29.426 "data_offset": 2048, 00:28:29.426 "data_size": 63488 00:28:29.426 }, 00:28:29.426 { 00:28:29.426 "name": "BaseBdev2", 00:28:29.426 "uuid": "57925c43-1077-4fb0-9f2f-f1ed10514dbd", 00:28:29.426 "is_configured": true, 00:28:29.426 "data_offset": 2048, 00:28:29.426 "data_size": 63488 00:28:29.426 }, 00:28:29.426 { 00:28:29.426 "name": "BaseBdev3", 00:28:29.426 "uuid": "49034d23-d844-4f26-b1f6-89ce28ba16b9", 00:28:29.426 "is_configured": true, 00:28:29.426 "data_offset": 2048, 00:28:29.426 "data_size": 63488 00:28:29.426 }, 00:28:29.426 { 00:28:29.426 "name": "BaseBdev4", 00:28:29.426 "uuid": "5f380976-7fd4-4755-8aee-485108fca646", 00:28:29.426 "is_configured": true, 00:28:29.426 "data_offset": 2048, 00:28:29.426 "data_size": 63488 00:28:29.426 } 00:28:29.426 ] 00:28:29.426 }' 00:28:29.426 08:55:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.426 08:55:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:30.362 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:30.620 [2024-07-12 08:55:05.700749] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:30.620 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:30.620 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:30.620 08:55:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.620 08:55:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:30.879 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:30.879 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:30.879 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:31.138 [2024-07-12 08:55:06.321027] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:31.397 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:31.397 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:31.397 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.397 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:31.656 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:31.656 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:31.656 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:31.914 [2024-07-12 08:55:06.880742] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:31.914 [2024-07-12 08:55:06.881131] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:31.914 [2024-07-12 08:55:06.964589] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:31.915 [2024-07-12 08:55:06.964906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:31.915 [2024-07-12 08:55:06.965010] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:28:31.915 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:31.915 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:31.915 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.915 08:55:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:32.173 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:32.173 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:32.173 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:32.173 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:32.173 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:32.174 08:55:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:32.433 BaseBdev2 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:32.433 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:32.692 08:55:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:32.950 [ 00:28:32.951 { 00:28:32.951 "name": "BaseBdev2", 00:28:32.951 "aliases": [ 00:28:32.951 "ce30fa2e-f38a-4077-b7f3-48ef12c434a8" 00:28:32.951 ], 00:28:32.951 "product_name": "Malloc disk", 00:28:32.951 "block_size": 512, 00:28:32.951 "num_blocks": 65536, 00:28:32.951 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:32.951 "assigned_rate_limits": { 00:28:32.951 "rw_ios_per_sec": 0, 00:28:32.951 "rw_mbytes_per_sec": 0, 00:28:32.951 "r_mbytes_per_sec": 0, 00:28:32.951 "w_mbytes_per_sec": 0 00:28:32.951 }, 00:28:32.951 "claimed": false, 00:28:32.951 "zoned": false, 00:28:32.951 "supported_io_types": { 00:28:32.951 "read": true, 00:28:32.951 "write": true, 00:28:32.951 "unmap": true, 00:28:32.951 "flush": true, 00:28:32.951 "reset": true, 00:28:32.951 "nvme_admin": false, 00:28:32.951 "nvme_io": false, 00:28:32.951 "nvme_io_md": false, 00:28:32.951 "write_zeroes": true, 00:28:32.951 "zcopy": true, 00:28:32.951 "get_zone_info": false, 00:28:32.951 "zone_management": false, 00:28:32.951 "zone_append": false, 00:28:32.951 "compare": false, 00:28:32.951 "compare_and_write": false, 00:28:32.951 "abort": true, 00:28:32.951 "seek_hole": false, 00:28:32.951 "seek_data": false, 00:28:32.951 "copy": true, 00:28:32.951 "nvme_iov_md": false 00:28:32.951 }, 00:28:32.951 "memory_domains": [ 00:28:32.951 { 00:28:32.951 "dma_device_id": "system", 00:28:32.951 "dma_device_type": 1 00:28:32.951 }, 00:28:32.951 { 00:28:32.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.951 "dma_device_type": 2 00:28:32.951 } 00:28:32.951 ], 00:28:32.951 "driver_specific": {} 00:28:32.951 } 00:28:32.951 ] 00:28:32.951 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:32.951 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:32.951 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:32.951 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:33.210 BaseBdev3 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:33.210 08:55:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:33.210 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:33.468 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:33.726 [ 00:28:33.726 { 00:28:33.726 "name": "BaseBdev3", 00:28:33.726 "aliases": [ 00:28:33.726 "c69b3613-3bba-4f91-9c2b-d0eaabc5191f" 00:28:33.726 ], 00:28:33.726 "product_name": "Malloc disk", 00:28:33.726 "block_size": 512, 00:28:33.726 "num_blocks": 65536, 00:28:33.726 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:33.726 "assigned_rate_limits": { 00:28:33.726 "rw_ios_per_sec": 0, 00:28:33.726 "rw_mbytes_per_sec": 0, 00:28:33.726 "r_mbytes_per_sec": 0, 00:28:33.726 "w_mbytes_per_sec": 0 00:28:33.726 }, 00:28:33.726 "claimed": false, 00:28:33.726 "zoned": false, 00:28:33.726 "supported_io_types": { 00:28:33.726 "read": true, 00:28:33.726 "write": true, 00:28:33.726 "unmap": true, 00:28:33.726 "flush": true, 00:28:33.726 "reset": true, 00:28:33.726 "nvme_admin": false, 00:28:33.726 "nvme_io": false, 00:28:33.726 "nvme_io_md": false, 00:28:33.726 "write_zeroes": true, 00:28:33.726 "zcopy": true, 00:28:33.726 "get_zone_info": false, 00:28:33.726 "zone_management": false, 00:28:33.726 "zone_append": false, 00:28:33.726 "compare": false, 00:28:33.726 "compare_and_write": false, 00:28:33.726 "abort": true, 00:28:33.726 "seek_hole": false, 00:28:33.726 "seek_data": false, 00:28:33.726 "copy": true, 00:28:33.726 "nvme_iov_md": false 00:28:33.726 }, 00:28:33.726 "memory_domains": [ 00:28:33.726 { 00:28:33.726 "dma_device_id": "system", 00:28:33.726 "dma_device_type": 1 00:28:33.726 }, 00:28:33.726 { 00:28:33.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.726 "dma_device_type": 2 00:28:33.726 } 00:28:33.726 ], 00:28:33.726 "driver_specific": {} 00:28:33.727 } 00:28:33.727 ] 00:28:33.727 08:55:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:33.727 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:33.727 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:33.727 08:55:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:33.985 BaseBdev4 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:33.985 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:34.244 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:34.502 [ 00:28:34.502 { 00:28:34.502 "name": "BaseBdev4", 00:28:34.502 "aliases": [ 00:28:34.502 "3a7d5936-f185-4f46-a19f-4b274191a935" 00:28:34.502 ], 00:28:34.502 "product_name": "Malloc disk", 00:28:34.502 "block_size": 512, 00:28:34.502 "num_blocks": 65536, 00:28:34.502 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:34.502 "assigned_rate_limits": { 00:28:34.502 "rw_ios_per_sec": 0, 00:28:34.502 "rw_mbytes_per_sec": 0, 00:28:34.502 "r_mbytes_per_sec": 0, 00:28:34.502 "w_mbytes_per_sec": 0 00:28:34.502 }, 00:28:34.502 "claimed": false, 00:28:34.502 "zoned": false, 00:28:34.502 "supported_io_types": { 00:28:34.502 "read": true, 00:28:34.502 "write": true, 00:28:34.502 "unmap": true, 00:28:34.502 "flush": true, 00:28:34.502 "reset": true, 00:28:34.502 "nvme_admin": false, 00:28:34.502 "nvme_io": false, 00:28:34.502 "nvme_io_md": false, 00:28:34.502 "write_zeroes": true, 00:28:34.502 "zcopy": true, 00:28:34.502 "get_zone_info": false, 00:28:34.502 "zone_management": false, 00:28:34.502 "zone_append": false, 00:28:34.502 "compare": false, 00:28:34.502 "compare_and_write": false, 00:28:34.502 "abort": true, 00:28:34.502 "seek_hole": false, 00:28:34.502 "seek_data": false, 00:28:34.502 "copy": true, 00:28:34.502 "nvme_iov_md": false 00:28:34.502 }, 00:28:34.502 "memory_domains": [ 00:28:34.502 { 00:28:34.502 "dma_device_id": "system", 00:28:34.502 "dma_device_type": 1 00:28:34.502 }, 00:28:34.502 { 00:28:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:34.502 "dma_device_type": 2 00:28:34.502 } 00:28:34.502 ], 00:28:34.502 "driver_specific": {} 00:28:34.502 } 00:28:34.502 ] 00:28:34.502 08:55:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:34.502 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:34.502 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:34.502 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:34.761 [2024-07-12 08:55:09.776992] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:34.761 [2024-07-12 08:55:09.777292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:34.761 [2024-07-12 08:55:09.777446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:34.761 [2024-07-12 08:55:09.779544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:34.761 [2024-07-12 08:55:09.779730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.761 08:55:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:35.019 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.019 "name": "Existed_Raid", 00:28:35.019 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:35.019 "strip_size_kb": 0, 00:28:35.019 "state": "configuring", 00:28:35.019 "raid_level": "raid1", 00:28:35.019 "superblock": true, 00:28:35.019 "num_base_bdevs": 4, 00:28:35.019 "num_base_bdevs_discovered": 3, 00:28:35.019 "num_base_bdevs_operational": 4, 00:28:35.019 "base_bdevs_list": [ 00:28:35.019 { 00:28:35.019 "name": "BaseBdev1", 00:28:35.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.019 "is_configured": false, 00:28:35.019 "data_offset": 0, 00:28:35.019 "data_size": 0 00:28:35.019 }, 00:28:35.019 { 00:28:35.019 "name": "BaseBdev2", 00:28:35.019 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:35.019 "is_configured": true, 00:28:35.019 "data_offset": 2048, 00:28:35.019 "data_size": 63488 00:28:35.019 }, 00:28:35.019 { 00:28:35.019 "name": "BaseBdev3", 00:28:35.019 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:35.019 "is_configured": true, 00:28:35.019 "data_offset": 2048, 00:28:35.019 "data_size": 63488 00:28:35.019 }, 00:28:35.019 { 00:28:35.019 "name": "BaseBdev4", 00:28:35.019 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:35.019 "is_configured": true, 00:28:35.019 "data_offset": 2048, 00:28:35.019 "data_size": 63488 00:28:35.019 } 00:28:35.019 ] 00:28:35.019 }' 00:28:35.019 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.019 08:55:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:35.586 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:35.844 [2024-07-12 08:55:10.961229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.844 08:55:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:36.103 08:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.103 "name": "Existed_Raid", 00:28:36.103 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:36.103 "strip_size_kb": 0, 00:28:36.103 "state": "configuring", 00:28:36.103 "raid_level": "raid1", 00:28:36.103 "superblock": true, 00:28:36.103 "num_base_bdevs": 4, 00:28:36.103 "num_base_bdevs_discovered": 2, 00:28:36.103 "num_base_bdevs_operational": 4, 00:28:36.103 "base_bdevs_list": [ 00:28:36.103 { 00:28:36.103 "name": "BaseBdev1", 00:28:36.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.103 "is_configured": false, 00:28:36.103 "data_offset": 0, 00:28:36.103 "data_size": 0 00:28:36.103 }, 00:28:36.103 { 00:28:36.103 "name": null, 00:28:36.103 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:36.103 "is_configured": false, 00:28:36.103 "data_offset": 2048, 00:28:36.103 "data_size": 63488 00:28:36.103 }, 00:28:36.103 { 00:28:36.103 "name": "BaseBdev3", 00:28:36.103 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:36.103 "is_configured": true, 00:28:36.103 "data_offset": 2048, 00:28:36.103 "data_size": 63488 00:28:36.103 }, 00:28:36.103 { 00:28:36.103 "name": "BaseBdev4", 00:28:36.103 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:36.103 "is_configured": true, 00:28:36.103 "data_offset": 2048, 00:28:36.103 "data_size": 63488 00:28:36.103 } 00:28:36.103 ] 00:28:36.103 }' 00:28:36.103 08:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.103 08:55:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:37.040 08:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.040 08:55:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:37.040 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:37.040 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:37.299 [2024-07-12 08:55:12.395903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:37.299 BaseBdev1 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:37.299 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:37.557 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:37.816 [ 00:28:37.816 { 00:28:37.816 "name": "BaseBdev1", 00:28:37.816 "aliases": [ 00:28:37.816 "05bcf5d8-8e62-4c1b-82ec-560748cdb9de" 00:28:37.816 ], 00:28:37.816 "product_name": "Malloc disk", 00:28:37.816 "block_size": 512, 00:28:37.816 "num_blocks": 65536, 00:28:37.816 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:37.816 "assigned_rate_limits": { 00:28:37.816 "rw_ios_per_sec": 0, 00:28:37.816 "rw_mbytes_per_sec": 0, 00:28:37.816 "r_mbytes_per_sec": 0, 00:28:37.816 "w_mbytes_per_sec": 0 00:28:37.816 }, 00:28:37.816 "claimed": true, 00:28:37.816 "claim_type": "exclusive_write", 00:28:37.816 "zoned": false, 00:28:37.816 "supported_io_types": { 00:28:37.816 "read": true, 00:28:37.816 "write": true, 00:28:37.816 "unmap": true, 00:28:37.816 "flush": true, 00:28:37.816 "reset": true, 00:28:37.816 "nvme_admin": false, 00:28:37.816 "nvme_io": false, 00:28:37.816 "nvme_io_md": false, 00:28:37.816 "write_zeroes": true, 00:28:37.816 "zcopy": true, 00:28:37.816 "get_zone_info": false, 00:28:37.816 "zone_management": false, 00:28:37.816 "zone_append": false, 00:28:37.816 "compare": false, 00:28:37.816 "compare_and_write": false, 00:28:37.816 "abort": true, 00:28:37.816 "seek_hole": false, 00:28:37.816 "seek_data": false, 00:28:37.816 "copy": true, 00:28:37.816 "nvme_iov_md": false 00:28:37.816 }, 00:28:37.816 "memory_domains": [ 00:28:37.816 { 00:28:37.816 "dma_device_id": "system", 00:28:37.816 "dma_device_type": 1 00:28:37.816 }, 00:28:37.816 { 00:28:37.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.816 "dma_device_type": 2 00:28:37.816 } 00:28:37.816 ], 00:28:37.816 "driver_specific": {} 00:28:37.816 } 00:28:37.816 ] 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:37.816 08:55:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.816 08:55:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.075 08:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.075 "name": "Existed_Raid", 00:28:38.075 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:38.075 "strip_size_kb": 0, 00:28:38.075 "state": "configuring", 00:28:38.075 "raid_level": "raid1", 00:28:38.075 "superblock": true, 00:28:38.075 "num_base_bdevs": 4, 00:28:38.075 "num_base_bdevs_discovered": 3, 00:28:38.075 "num_base_bdevs_operational": 4, 00:28:38.075 "base_bdevs_list": [ 00:28:38.075 { 00:28:38.075 "name": "BaseBdev1", 00:28:38.075 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:38.075 "is_configured": true, 00:28:38.075 "data_offset": 2048, 00:28:38.075 "data_size": 63488 00:28:38.075 }, 00:28:38.075 { 00:28:38.075 "name": null, 00:28:38.075 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:38.075 "is_configured": false, 00:28:38.075 "data_offset": 2048, 00:28:38.075 "data_size": 63488 00:28:38.075 }, 00:28:38.075 { 00:28:38.075 "name": "BaseBdev3", 00:28:38.075 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:38.075 "is_configured": true, 00:28:38.075 "data_offset": 2048, 00:28:38.075 "data_size": 63488 00:28:38.075 }, 00:28:38.075 { 00:28:38.075 "name": "BaseBdev4", 00:28:38.075 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:38.075 "is_configured": true, 00:28:38.075 "data_offset": 2048, 00:28:38.075 "data_size": 63488 00:28:38.075 } 00:28:38.075 ] 00:28:38.075 }' 00:28:38.075 08:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.075 08:55:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.010 08:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.010 08:55:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:39.010 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:39.010 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:39.268 [2024-07-12 08:55:14.456543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.572 "name": "Existed_Raid", 00:28:39.572 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:39.572 "strip_size_kb": 0, 00:28:39.572 "state": "configuring", 00:28:39.572 "raid_level": "raid1", 00:28:39.572 "superblock": true, 00:28:39.572 "num_base_bdevs": 4, 00:28:39.572 "num_base_bdevs_discovered": 2, 00:28:39.572 "num_base_bdevs_operational": 4, 00:28:39.572 "base_bdevs_list": [ 00:28:39.572 { 00:28:39.572 "name": "BaseBdev1", 00:28:39.572 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:39.572 "is_configured": true, 00:28:39.572 "data_offset": 2048, 00:28:39.572 "data_size": 63488 00:28:39.572 }, 00:28:39.572 { 00:28:39.572 "name": null, 00:28:39.572 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:39.572 "is_configured": false, 00:28:39.572 "data_offset": 2048, 00:28:39.572 "data_size": 63488 00:28:39.572 }, 00:28:39.572 { 00:28:39.572 "name": null, 00:28:39.572 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:39.572 "is_configured": false, 00:28:39.572 "data_offset": 2048, 00:28:39.572 "data_size": 63488 00:28:39.572 }, 00:28:39.572 { 00:28:39.572 "name": "BaseBdev4", 00:28:39.572 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:39.572 "is_configured": true, 00:28:39.572 "data_offset": 2048, 00:28:39.572 "data_size": 63488 00:28:39.572 } 00:28:39.572 ] 00:28:39.572 }' 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.572 08:55:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:40.526 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.526 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:40.526 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:40.526 08:55:15 
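Removing BaseBdev3 drops num_base_bdevs_discovered from 3 to 2 while the array stays in the configuring state, and slot 2 of base_bdevs_list loses its name but keeps its uuid. A sketch of the remove plus the two jq checks used above, with the same $rpc shorthand as before:

    $rpc bdev_raid_remove_base_bdev BaseBdev3
    # full view of the array under construction
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # slot 2 must now be unconfigured
    [[ $($rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured') == false ]]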
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:40.785 [2024-07-12 08:55:15.928947] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.785 08:55:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.044 08:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.044 "name": "Existed_Raid", 00:28:41.044 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:41.044 "strip_size_kb": 0, 00:28:41.044 "state": "configuring", 00:28:41.044 "raid_level": "raid1", 00:28:41.044 "superblock": true, 00:28:41.044 "num_base_bdevs": 4, 00:28:41.044 "num_base_bdevs_discovered": 3, 00:28:41.044 "num_base_bdevs_operational": 4, 00:28:41.044 "base_bdevs_list": [ 00:28:41.044 { 00:28:41.044 "name": "BaseBdev1", 00:28:41.044 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:41.044 "is_configured": true, 00:28:41.044 "data_offset": 2048, 00:28:41.044 "data_size": 63488 00:28:41.044 }, 00:28:41.044 { 00:28:41.044 "name": null, 00:28:41.044 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:41.044 "is_configured": false, 00:28:41.044 "data_offset": 2048, 00:28:41.044 "data_size": 63488 00:28:41.044 }, 00:28:41.044 { 00:28:41.044 "name": "BaseBdev3", 00:28:41.044 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:41.044 "is_configured": true, 00:28:41.044 "data_offset": 2048, 00:28:41.044 "data_size": 63488 00:28:41.044 }, 00:28:41.044 { 00:28:41.044 "name": "BaseBdev4", 00:28:41.044 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:41.044 "is_configured": true, 00:28:41.044 "data_offset": 2048, 00:28:41.044 "data_size": 63488 00:28:41.044 } 00:28:41.044 ] 00:28:41.044 }' 00:28:41.044 08:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.044 08:55:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.980 08:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.980 08:55:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:41.980 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:41.980 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:42.238 [2024-07-12 08:55:17.369335] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.496 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:42.754 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.754 "name": "Existed_Raid", 00:28:42.754 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:42.754 "strip_size_kb": 0, 00:28:42.754 "state": "configuring", 00:28:42.754 "raid_level": "raid1", 00:28:42.754 "superblock": true, 00:28:42.754 "num_base_bdevs": 4, 00:28:42.754 "num_base_bdevs_discovered": 2, 00:28:42.754 "num_base_bdevs_operational": 4, 00:28:42.754 "base_bdevs_list": [ 00:28:42.754 { 00:28:42.754 "name": null, 00:28:42.754 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:42.754 "is_configured": false, 00:28:42.754 "data_offset": 2048, 00:28:42.754 "data_size": 63488 00:28:42.754 }, 00:28:42.754 { 00:28:42.754 "name": null, 00:28:42.754 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:42.754 "is_configured": false, 00:28:42.754 "data_offset": 2048, 00:28:42.754 "data_size": 63488 00:28:42.754 }, 00:28:42.754 { 00:28:42.754 "name": "BaseBdev3", 00:28:42.754 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:42.754 "is_configured": true, 00:28:42.754 "data_offset": 2048, 00:28:42.754 "data_size": 63488 00:28:42.754 }, 00:28:42.754 { 00:28:42.754 "name": "BaseBdev4", 00:28:42.754 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:42.754 "is_configured": true, 00:28:42.754 "data_offset": 2048, 00:28:42.754 "data_size": 63488 00:28:42.754 } 
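Deleting BaseBdev1 out from under the array with bdev_malloc_delete, rather than detaching it with bdev_raid_remove_base_bdev, has the same effect on the state: slot 0 flips to unconfigured, but its uuid stays recorded in base_bdevs_list so the device can be re-claimed later. A sketch of that step:

    $rpc bdev_malloc_delete BaseBdev1
    # slot 0 keeps its uuid for later re-claiming
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0]'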
00:28:42.754 ] 00:28:42.754 }' 00:28:42.754 08:55:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.754 08:55:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.319 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.319 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:43.577 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:43.577 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:43.834 [2024-07-12 08:55:18.873212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.834 08:55:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:44.092 08:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:44.092 "name": "Existed_Raid", 00:28:44.092 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:44.092 "strip_size_kb": 0, 00:28:44.092 "state": "configuring", 00:28:44.092 "raid_level": "raid1", 00:28:44.092 "superblock": true, 00:28:44.092 "num_base_bdevs": 4, 00:28:44.092 "num_base_bdevs_discovered": 3, 00:28:44.092 "num_base_bdevs_operational": 4, 00:28:44.092 "base_bdevs_list": [ 00:28:44.092 { 00:28:44.092 "name": null, 00:28:44.092 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:44.092 "is_configured": false, 00:28:44.092 "data_offset": 2048, 00:28:44.092 "data_size": 63488 00:28:44.092 }, 00:28:44.092 { 00:28:44.092 "name": "BaseBdev2", 00:28:44.092 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:44.092 "is_configured": true, 00:28:44.092 "data_offset": 2048, 00:28:44.092 "data_size": 63488 00:28:44.092 }, 00:28:44.092 { 00:28:44.092 "name": "BaseBdev3", 00:28:44.092 "uuid": 
"c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:44.092 "is_configured": true, 00:28:44.092 "data_offset": 2048, 00:28:44.092 "data_size": 63488 00:28:44.092 }, 00:28:44.092 { 00:28:44.092 "name": "BaseBdev4", 00:28:44.092 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:44.092 "is_configured": true, 00:28:44.092 "data_offset": 2048, 00:28:44.092 "data_size": 63488 00:28:44.092 } 00:28:44.092 ] 00:28:44.092 }' 00:28:44.092 08:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:44.092 08:55:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.657 08:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.657 08:55:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:44.916 08:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:44.916 08:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.916 08:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:45.174 08:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 05bcf5d8-8e62-4c1b-82ec-560748cdb9de 00:28:45.432 [2024-07-12 08:55:20.550924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:45.432 [2024-07-12 08:55:20.551545] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:45.432 [2024-07-12 08:55:20.551758] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.432 NewBaseBdev 00:28:45.432 [2024-07-12 08:55:20.552173] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:45.432 [2024-07-12 08:55:20.552764] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:45.432 [2024-07-12 08:55:20.552966] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:28:45.432 [2024-07-12 08:55:20.553282] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:28:45.432 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:45.689 08:55:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:45.947 [ 00:28:45.947 { 00:28:45.947 "name": "NewBaseBdev", 00:28:45.947 "aliases": [ 00:28:45.947 "05bcf5d8-8e62-4c1b-82ec-560748cdb9de" 00:28:45.947 ], 00:28:45.947 "product_name": "Malloc disk", 00:28:45.947 "block_size": 512, 00:28:45.947 "num_blocks": 65536, 00:28:45.947 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:45.947 "assigned_rate_limits": { 00:28:45.947 "rw_ios_per_sec": 0, 00:28:45.947 "rw_mbytes_per_sec": 0, 00:28:45.947 "r_mbytes_per_sec": 0, 00:28:45.947 "w_mbytes_per_sec": 0 00:28:45.947 }, 00:28:45.947 "claimed": true, 00:28:45.947 "claim_type": "exclusive_write", 00:28:45.947 "zoned": false, 00:28:45.947 "supported_io_types": { 00:28:45.947 "read": true, 00:28:45.947 "write": true, 00:28:45.947 "unmap": true, 00:28:45.947 "flush": true, 00:28:45.947 "reset": true, 00:28:45.947 "nvme_admin": false, 00:28:45.947 "nvme_io": false, 00:28:45.947 "nvme_io_md": false, 00:28:45.947 "write_zeroes": true, 00:28:45.947 "zcopy": true, 00:28:45.947 "get_zone_info": false, 00:28:45.947 "zone_management": false, 00:28:45.947 "zone_append": false, 00:28:45.947 "compare": false, 00:28:45.947 "compare_and_write": false, 00:28:45.947 "abort": true, 00:28:45.947 "seek_hole": false, 00:28:45.947 "seek_data": false, 00:28:45.947 "copy": true, 00:28:45.947 "nvme_iov_md": false 00:28:45.947 }, 00:28:45.947 "memory_domains": [ 00:28:45.947 { 00:28:45.947 "dma_device_id": "system", 00:28:45.947 "dma_device_type": 1 00:28:45.947 }, 00:28:45.947 { 00:28:45.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.947 "dma_device_type": 2 00:28:45.947 } 00:28:45.947 ], 00:28:45.947 "driver_specific": {} 00:28:45.947 } 00:28:45.947 ] 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.947 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.948 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.948 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:46.206 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:46.206 "name": "Existed_Raid", 00:28:46.206 "uuid": 
"bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:46.206 "strip_size_kb": 0, 00:28:46.206 "state": "online", 00:28:46.206 "raid_level": "raid1", 00:28:46.206 "superblock": true, 00:28:46.206 "num_base_bdevs": 4, 00:28:46.206 "num_base_bdevs_discovered": 4, 00:28:46.206 "num_base_bdevs_operational": 4, 00:28:46.206 "base_bdevs_list": [ 00:28:46.206 { 00:28:46.206 "name": "NewBaseBdev", 00:28:46.206 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:46.206 "is_configured": true, 00:28:46.206 "data_offset": 2048, 00:28:46.206 "data_size": 63488 00:28:46.206 }, 00:28:46.206 { 00:28:46.206 "name": "BaseBdev2", 00:28:46.206 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:46.206 "is_configured": true, 00:28:46.206 "data_offset": 2048, 00:28:46.206 "data_size": 63488 00:28:46.206 }, 00:28:46.206 { 00:28:46.206 "name": "BaseBdev3", 00:28:46.206 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:46.206 "is_configured": true, 00:28:46.206 "data_offset": 2048, 00:28:46.206 "data_size": 63488 00:28:46.206 }, 00:28:46.206 { 00:28:46.206 "name": "BaseBdev4", 00:28:46.206 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:46.206 "is_configured": true, 00:28:46.206 "data_offset": 2048, 00:28:46.206 "data_size": 63488 00:28:46.206 } 00:28:46.206 ] 00:28:46.206 }' 00:28:46.206 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:46.206 08:55:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:47.140 08:55:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:47.140 [2024-07-12 08:55:22.261179] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:47.140 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:47.140 "name": "Existed_Raid", 00:28:47.140 "aliases": [ 00:28:47.140 "bd196e1f-b95e-4db0-ad7c-2afa6291aac9" 00:28:47.140 ], 00:28:47.140 "product_name": "Raid Volume", 00:28:47.140 "block_size": 512, 00:28:47.140 "num_blocks": 63488, 00:28:47.140 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:47.140 "assigned_rate_limits": { 00:28:47.140 "rw_ios_per_sec": 0, 00:28:47.140 "rw_mbytes_per_sec": 0, 00:28:47.140 "r_mbytes_per_sec": 0, 00:28:47.140 "w_mbytes_per_sec": 0 00:28:47.140 }, 00:28:47.140 "claimed": false, 00:28:47.140 "zoned": false, 00:28:47.140 "supported_io_types": { 00:28:47.140 "read": true, 00:28:47.140 "write": true, 00:28:47.140 "unmap": false, 00:28:47.140 "flush": false, 00:28:47.140 "reset": true, 00:28:47.140 "nvme_admin": false, 00:28:47.140 "nvme_io": false, 00:28:47.140 "nvme_io_md": false, 00:28:47.140 
"write_zeroes": true, 00:28:47.140 "zcopy": false, 00:28:47.140 "get_zone_info": false, 00:28:47.140 "zone_management": false, 00:28:47.140 "zone_append": false, 00:28:47.140 "compare": false, 00:28:47.140 "compare_and_write": false, 00:28:47.140 "abort": false, 00:28:47.140 "seek_hole": false, 00:28:47.140 "seek_data": false, 00:28:47.140 "copy": false, 00:28:47.140 "nvme_iov_md": false 00:28:47.140 }, 00:28:47.140 "memory_domains": [ 00:28:47.140 { 00:28:47.140 "dma_device_id": "system", 00:28:47.140 "dma_device_type": 1 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.140 "dma_device_type": 2 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "system", 00:28:47.140 "dma_device_type": 1 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.140 "dma_device_type": 2 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "system", 00:28:47.140 "dma_device_type": 1 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.140 "dma_device_type": 2 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "system", 00:28:47.140 "dma_device_type": 1 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.140 "dma_device_type": 2 00:28:47.140 } 00:28:47.140 ], 00:28:47.140 "driver_specific": { 00:28:47.140 "raid": { 00:28:47.140 "uuid": "bd196e1f-b95e-4db0-ad7c-2afa6291aac9", 00:28:47.140 "strip_size_kb": 0, 00:28:47.140 "state": "online", 00:28:47.140 "raid_level": "raid1", 00:28:47.140 "superblock": true, 00:28:47.140 "num_base_bdevs": 4, 00:28:47.140 "num_base_bdevs_discovered": 4, 00:28:47.140 "num_base_bdevs_operational": 4, 00:28:47.140 "base_bdevs_list": [ 00:28:47.140 { 00:28:47.140 "name": "NewBaseBdev", 00:28:47.140 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:47.140 "is_configured": true, 00:28:47.140 "data_offset": 2048, 00:28:47.140 "data_size": 63488 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "name": "BaseBdev2", 00:28:47.140 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:47.140 "is_configured": true, 00:28:47.140 "data_offset": 2048, 00:28:47.140 "data_size": 63488 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "name": "BaseBdev3", 00:28:47.140 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:47.140 "is_configured": true, 00:28:47.140 "data_offset": 2048, 00:28:47.140 "data_size": 63488 00:28:47.140 }, 00:28:47.140 { 00:28:47.140 "name": "BaseBdev4", 00:28:47.140 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:47.140 "is_configured": true, 00:28:47.140 "data_offset": 2048, 00:28:47.140 "data_size": 63488 00:28:47.140 } 00:28:47.140 ] 00:28:47.140 } 00:28:47.140 } 00:28:47.140 }' 00:28:47.140 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:47.398 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:47.398 BaseBdev2 00:28:47.398 BaseBdev3 00:28:47.398 BaseBdev4' 00:28:47.398 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:47.398 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:47.398 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:47.656 08:55:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:47.656 "name": "NewBaseBdev", 00:28:47.656 "aliases": [ 00:28:47.656 "05bcf5d8-8e62-4c1b-82ec-560748cdb9de" 00:28:47.656 ], 00:28:47.656 "product_name": "Malloc disk", 00:28:47.656 "block_size": 512, 00:28:47.656 "num_blocks": 65536, 00:28:47.656 "uuid": "05bcf5d8-8e62-4c1b-82ec-560748cdb9de", 00:28:47.656 "assigned_rate_limits": { 00:28:47.656 "rw_ios_per_sec": 0, 00:28:47.656 "rw_mbytes_per_sec": 0, 00:28:47.656 "r_mbytes_per_sec": 0, 00:28:47.656 "w_mbytes_per_sec": 0 00:28:47.656 }, 00:28:47.656 "claimed": true, 00:28:47.656 "claim_type": "exclusive_write", 00:28:47.656 "zoned": false, 00:28:47.656 "supported_io_types": { 00:28:47.656 "read": true, 00:28:47.656 "write": true, 00:28:47.656 "unmap": true, 00:28:47.656 "flush": true, 00:28:47.656 "reset": true, 00:28:47.656 "nvme_admin": false, 00:28:47.656 "nvme_io": false, 00:28:47.656 "nvme_io_md": false, 00:28:47.656 "write_zeroes": true, 00:28:47.656 "zcopy": true, 00:28:47.656 "get_zone_info": false, 00:28:47.656 "zone_management": false, 00:28:47.656 "zone_append": false, 00:28:47.656 "compare": false, 00:28:47.656 "compare_and_write": false, 00:28:47.656 "abort": true, 00:28:47.656 "seek_hole": false, 00:28:47.656 "seek_data": false, 00:28:47.657 "copy": true, 00:28:47.657 "nvme_iov_md": false 00:28:47.657 }, 00:28:47.657 "memory_domains": [ 00:28:47.657 { 00:28:47.657 "dma_device_id": "system", 00:28:47.657 "dma_device_type": 1 00:28:47.657 }, 00:28:47.657 { 00:28:47.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.657 "dma_device_type": 2 00:28:47.657 } 00:28:47.657 ], 00:28:47.657 "driver_specific": {} 00:28:47.657 }' 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:47.657 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:47.915 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:47.915 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:47.915 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:47.915 08:55:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:47.915 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:47.915 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:47.915 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:47.915 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:48.172 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:48.172 "name": "BaseBdev2", 00:28:48.172 "aliases": [ 
00:28:48.172 "ce30fa2e-f38a-4077-b7f3-48ef12c434a8" 00:28:48.172 ], 00:28:48.172 "product_name": "Malloc disk", 00:28:48.172 "block_size": 512, 00:28:48.172 "num_blocks": 65536, 00:28:48.172 "uuid": "ce30fa2e-f38a-4077-b7f3-48ef12c434a8", 00:28:48.172 "assigned_rate_limits": { 00:28:48.172 "rw_ios_per_sec": 0, 00:28:48.172 "rw_mbytes_per_sec": 0, 00:28:48.172 "r_mbytes_per_sec": 0, 00:28:48.172 "w_mbytes_per_sec": 0 00:28:48.172 }, 00:28:48.172 "claimed": true, 00:28:48.172 "claim_type": "exclusive_write", 00:28:48.172 "zoned": false, 00:28:48.172 "supported_io_types": { 00:28:48.172 "read": true, 00:28:48.172 "write": true, 00:28:48.172 "unmap": true, 00:28:48.172 "flush": true, 00:28:48.172 "reset": true, 00:28:48.172 "nvme_admin": false, 00:28:48.172 "nvme_io": false, 00:28:48.172 "nvme_io_md": false, 00:28:48.172 "write_zeroes": true, 00:28:48.172 "zcopy": true, 00:28:48.172 "get_zone_info": false, 00:28:48.172 "zone_management": false, 00:28:48.172 "zone_append": false, 00:28:48.172 "compare": false, 00:28:48.172 "compare_and_write": false, 00:28:48.172 "abort": true, 00:28:48.172 "seek_hole": false, 00:28:48.172 "seek_data": false, 00:28:48.172 "copy": true, 00:28:48.172 "nvme_iov_md": false 00:28:48.172 }, 00:28:48.172 "memory_domains": [ 00:28:48.172 { 00:28:48.172 "dma_device_id": "system", 00:28:48.172 "dma_device_type": 1 00:28:48.172 }, 00:28:48.172 { 00:28:48.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.172 "dma_device_type": 2 00:28:48.172 } 00:28:48.172 ], 00:28:48.172 "driver_specific": {} 00:28:48.172 }' 00:28:48.172 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.456 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:48.714 08:55:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:48.973 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:48.973 "name": "BaseBdev3", 00:28:48.973 "aliases": [ 00:28:48.973 "c69b3613-3bba-4f91-9c2b-d0eaabc5191f" 00:28:48.973 ], 00:28:48.973 "product_name": "Malloc disk", 00:28:48.973 "block_size": 512, 
00:28:48.973 "num_blocks": 65536, 00:28:48.973 "uuid": "c69b3613-3bba-4f91-9c2b-d0eaabc5191f", 00:28:48.973 "assigned_rate_limits": { 00:28:48.973 "rw_ios_per_sec": 0, 00:28:48.973 "rw_mbytes_per_sec": 0, 00:28:48.973 "r_mbytes_per_sec": 0, 00:28:48.973 "w_mbytes_per_sec": 0 00:28:48.973 }, 00:28:48.973 "claimed": true, 00:28:48.973 "claim_type": "exclusive_write", 00:28:48.973 "zoned": false, 00:28:48.973 "supported_io_types": { 00:28:48.973 "read": true, 00:28:48.973 "write": true, 00:28:48.973 "unmap": true, 00:28:48.973 "flush": true, 00:28:48.973 "reset": true, 00:28:48.973 "nvme_admin": false, 00:28:48.973 "nvme_io": false, 00:28:48.973 "nvme_io_md": false, 00:28:48.973 "write_zeroes": true, 00:28:48.973 "zcopy": true, 00:28:48.973 "get_zone_info": false, 00:28:48.973 "zone_management": false, 00:28:48.973 "zone_append": false, 00:28:48.973 "compare": false, 00:28:48.973 "compare_and_write": false, 00:28:48.973 "abort": true, 00:28:48.973 "seek_hole": false, 00:28:48.973 "seek_data": false, 00:28:48.973 "copy": true, 00:28:48.973 "nvme_iov_md": false 00:28:48.973 }, 00:28:48.973 "memory_domains": [ 00:28:48.973 { 00:28:48.973 "dma_device_id": "system", 00:28:48.973 "dma_device_type": 1 00:28:48.973 }, 00:28:48.973 { 00:28:48.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.973 "dma_device_type": 2 00:28:48.973 } 00:28:48.973 ], 00:28:48.973 "driver_specific": {} 00:28:48.973 }' 00:28:48.973 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.973 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:49.232 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.522 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.522 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:49.522 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:49.522 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:49.522 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:49.780 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:49.780 "name": "BaseBdev4", 00:28:49.780 "aliases": [ 00:28:49.780 "3a7d5936-f185-4f46-a19f-4b274191a935" 00:28:49.780 ], 00:28:49.780 "product_name": "Malloc disk", 00:28:49.780 "block_size": 512, 00:28:49.780 "num_blocks": 65536, 00:28:49.780 "uuid": "3a7d5936-f185-4f46-a19f-4b274191a935", 00:28:49.780 "assigned_rate_limits": { 00:28:49.780 
"rw_ios_per_sec": 0, 00:28:49.780 "rw_mbytes_per_sec": 0, 00:28:49.780 "r_mbytes_per_sec": 0, 00:28:49.780 "w_mbytes_per_sec": 0 00:28:49.780 }, 00:28:49.780 "claimed": true, 00:28:49.780 "claim_type": "exclusive_write", 00:28:49.780 "zoned": false, 00:28:49.780 "supported_io_types": { 00:28:49.780 "read": true, 00:28:49.780 "write": true, 00:28:49.780 "unmap": true, 00:28:49.780 "flush": true, 00:28:49.780 "reset": true, 00:28:49.780 "nvme_admin": false, 00:28:49.780 "nvme_io": false, 00:28:49.780 "nvme_io_md": false, 00:28:49.780 "write_zeroes": true, 00:28:49.780 "zcopy": true, 00:28:49.780 "get_zone_info": false, 00:28:49.780 "zone_management": false, 00:28:49.780 "zone_append": false, 00:28:49.780 "compare": false, 00:28:49.780 "compare_and_write": false, 00:28:49.780 "abort": true, 00:28:49.780 "seek_hole": false, 00:28:49.780 "seek_data": false, 00:28:49.780 "copy": true, 00:28:49.780 "nvme_iov_md": false 00:28:49.780 }, 00:28:49.780 "memory_domains": [ 00:28:49.780 { 00:28:49.780 "dma_device_id": "system", 00:28:49.780 "dma_device_type": 1 00:28:49.780 }, 00:28:49.780 { 00:28:49.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.780 "dma_device_type": 2 00:28:49.780 } 00:28:49.780 ], 00:28:49.780 "driver_specific": {} 00:28:49.780 }' 00:28:49.780 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.780 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.780 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:49.780 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.039 08:55:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.039 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.298 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:50.298 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:50.557 [2024-07-12 08:55:25.529936] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:50.557 [2024-07-12 08:55:25.530400] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:50.557 [2024-07-12 08:55:25.530654] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:50.557 [2024-07-12 08:55:25.531141] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:50.557 [2024-07-12 08:55:25.531332] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 143661 00:28:50.557 08:55:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 143661 ']' 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 143661 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143661 00:28:50.557 killing process with pid 143661 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143661' 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 143661 00:28:50.557 08:55:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 143661 00:28:50.557 [2024-07-12 08:55:25.567010] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:50.816 [2024-07-12 08:55:25.877682] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:51.750 ************************************ 00:28:51.750 END TEST raid_state_function_test_sb 00:28:51.750 ************************************ 00:28:51.750 08:55:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:28:51.750 00:28:51.750 real 0m37.236s 00:28:51.750 user 1m9.827s 00:28:51.750 sys 0m4.254s 00:28:51.750 08:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:51.750 08:55:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.009 08:55:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:52.009 08:55:26 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:28:52.009 08:55:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:52.009 08:55:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.009 08:55:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:52.009 ************************************ 00:28:52.009 START TEST raid_superblock_test 00:28:52.009 ************************************ 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:28:52.009 08:55:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=144822 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 144822 /var/tmp/spdk-raid.sock 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 144822 ']' 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:52.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.009 08:55:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:52.009 [2024-07-12 08:55:27.048049] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
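raid_superblock_test starts its own SPDK app: bdev_svc is launched with -r pointing at the private RPC socket and -L bdev_raid to enable the debug log flags seen throughout this trace, and the harness blocks in waitforlisten until the socket accepts RPCs. A sketch of that startup:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # waitforlisten polls the UNIX socket until RPCs succeed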
00:28:52.009 [2024-07-12 08:55:27.048559] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144822 ] 00:28:52.268 [2024-07-12 08:55:27.222263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.268 [2024-07-12 08:55:27.455948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.527 [2024-07-12 08:55:27.636577] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:53.095 08:55:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:53.095 malloc1 00:28:53.355 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:53.613 [2024-07-12 08:55:28.553441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:53.613 [2024-07-12 08:55:28.553776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.613 [2024-07-12 08:55:28.553926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:53.613 [2024-07-12 08:55:28.554088] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.613 [2024-07-12 08:55:28.556845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.613 [2024-07-12 08:55:28.557026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:53.613 pt1 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:53.614 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:53.872 malloc2 00:28:53.872 08:55:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:54.130 [2024-07-12 08:55:29.104300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:54.130 [2024-07-12 08:55:29.104746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.130 [2024-07-12 08:55:29.104900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:28:54.130 [2024-07-12 08:55:29.105026] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.130 [2024-07-12 08:55:29.107540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.130 [2024-07-12 08:55:29.107717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:54.130 pt2 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:54.130 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:54.389 malloc3 00:28:54.389 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:54.647 [2024-07-12 08:55:29.600642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:54.647 [2024-07-12 08:55:29.601107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.647 [2024-07-12 08:55:29.601261] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:28:54.647 [2024-07-12 08:55:29.601424] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.647 [2024-07-12 08:55:29.603931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.647 [2024-07-12 08:55:29.604124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:54.647 pt3 00:28:54.647 
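For the superblock test the base devices are not raw malloc disks: each malloc bdev gets a passthru bdev stacked on top with a fixed, predictable uuid (00000000-...-000000000001 through ...0004), so the identities written into the raid superblock are under the test's control. One iteration of the loop traced above, shown for i=1:

    i=1
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"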
08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:54.647 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:54.907 malloc4 00:28:54.907 08:55:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:54.907 [2024-07-12 08:55:30.082195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:54.907 [2024-07-12 08:55:30.082521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.907 [2024-07-12 08:55:30.082696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:54.907 [2024-07-12 08:55:30.082824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.907 [2024-07-12 08:55:30.085412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.907 [2024-07-12 08:55:30.085602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:54.907 pt4 00:28:54.907 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:28:54.907 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:28:54.907 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:55.475 [2024-07-12 08:55:30.362452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:55.475 [2024-07-12 08:55:30.364849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:55.475 [2024-07-12 08:55:30.365091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:55.475 [2024-07-12 08:55:30.365277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:55.476 [2024-07-12 08:55:30.365743] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:28:55.476 [2024-07-12 08:55:30.365919] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:55.476 [2024-07-12 08:55:30.366121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:55.476 [2024-07-12 08:55:30.366626] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:28:55.476 [2024-07-12 08:55:30.366761] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:28:55.476 [2024-07-12 08:55:30.367087] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.476 "name": "raid_bdev1", 00:28:55.476 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:28:55.476 "strip_size_kb": 0, 00:28:55.476 "state": "online", 00:28:55.476 "raid_level": "raid1", 00:28:55.476 "superblock": true, 00:28:55.476 "num_base_bdevs": 4, 00:28:55.476 "num_base_bdevs_discovered": 4, 00:28:55.476 "num_base_bdevs_operational": 4, 00:28:55.476 "base_bdevs_list": [ 00:28:55.476 { 00:28:55.476 "name": "pt1", 00:28:55.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:55.476 "is_configured": true, 00:28:55.476 "data_offset": 2048, 00:28:55.476 "data_size": 63488 00:28:55.476 }, 00:28:55.476 { 00:28:55.476 "name": "pt2", 00:28:55.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:55.476 "is_configured": true, 00:28:55.476 "data_offset": 2048, 00:28:55.476 "data_size": 63488 00:28:55.476 }, 00:28:55.476 { 00:28:55.476 "name": "pt3", 00:28:55.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:55.476 "is_configured": true, 00:28:55.476 "data_offset": 2048, 00:28:55.476 "data_size": 63488 00:28:55.476 }, 00:28:55.476 { 00:28:55.476 "name": "pt4", 00:28:55.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:55.476 "is_configured": true, 00:28:55.476 "data_offset": 2048, 00:28:55.476 "data_size": 63488 00:28:55.476 } 00:28:55.476 ] 00:28:55.476 }' 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.476 08:55:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:56.414 
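With all four passthru members in place, the trace assembles them into a raid1 volume with an on-member superblock and immediately re-reads its state. The same two RPCs, reduced to a minimal check (command lines as captured above; the jq one-liner is an illustrative condensation of verify_raid_bdev_state, and the 63488-block data_size in the dumps is the 65536-block member minus the 2048-block data_offset that the superblock metadata appears to reserve):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# -r selects the RAID level, -s asks for an on-member superblock
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# fetch the array's runtime state and confirm it came online with all four members
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
# expected, per the verify_raid_bdev_state call above: online raid1 4/4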
08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:56.414 [2024-07-12 08:55:31.511587] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:56.414 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:56.414 "name": "raid_bdev1", 00:28:56.414 "aliases": [ 00:28:56.414 "5784f098-bb0b-4f63-850c-f22ec139358b" 00:28:56.414 ], 00:28:56.414 "product_name": "Raid Volume", 00:28:56.414 "block_size": 512, 00:28:56.414 "num_blocks": 63488, 00:28:56.414 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:28:56.414 "assigned_rate_limits": { 00:28:56.414 "rw_ios_per_sec": 0, 00:28:56.414 "rw_mbytes_per_sec": 0, 00:28:56.414 "r_mbytes_per_sec": 0, 00:28:56.414 "w_mbytes_per_sec": 0 00:28:56.414 }, 00:28:56.414 "claimed": false, 00:28:56.414 "zoned": false, 00:28:56.414 "supported_io_types": { 00:28:56.414 "read": true, 00:28:56.414 "write": true, 00:28:56.414 "unmap": false, 00:28:56.414 "flush": false, 00:28:56.414 "reset": true, 00:28:56.414 "nvme_admin": false, 00:28:56.414 "nvme_io": false, 00:28:56.414 "nvme_io_md": false, 00:28:56.414 "write_zeroes": true, 00:28:56.414 "zcopy": false, 00:28:56.414 "get_zone_info": false, 00:28:56.414 "zone_management": false, 00:28:56.414 "zone_append": false, 00:28:56.414 "compare": false, 00:28:56.414 "compare_and_write": false, 00:28:56.414 "abort": false, 00:28:56.414 "seek_hole": false, 00:28:56.414 "seek_data": false, 00:28:56.414 "copy": false, 00:28:56.414 "nvme_iov_md": false 00:28:56.414 }, 00:28:56.414 "memory_domains": [ 00:28:56.414 { 00:28:56.414 "dma_device_id": "system", 00:28:56.414 "dma_device_type": 1 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.414 "dma_device_type": 2 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "system", 00:28:56.414 "dma_device_type": 1 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.414 "dma_device_type": 2 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "system", 00:28:56.414 "dma_device_type": 1 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.414 "dma_device_type": 2 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "system", 00:28:56.414 "dma_device_type": 1 00:28:56.414 }, 00:28:56.414 { 00:28:56.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.414 "dma_device_type": 2 00:28:56.414 } 00:28:56.414 ], 00:28:56.414 "driver_specific": { 00:28:56.414 "raid": { 00:28:56.414 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:28:56.414 "strip_size_kb": 0, 00:28:56.414 "state": "online", 00:28:56.414 "raid_level": "raid1", 00:28:56.414 "superblock": true, 00:28:56.414 "num_base_bdevs": 4, 00:28:56.414 "num_base_bdevs_discovered": 4, 00:28:56.414 "num_base_bdevs_operational": 4, 00:28:56.414 "base_bdevs_list": [ 00:28:56.414 { 00:28:56.414 "name": "pt1", 00:28:56.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:56.415 "is_configured": true, 00:28:56.415 
"data_offset": 2048, 00:28:56.415 "data_size": 63488 00:28:56.415 }, 00:28:56.415 { 00:28:56.415 "name": "pt2", 00:28:56.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:56.415 "is_configured": true, 00:28:56.415 "data_offset": 2048, 00:28:56.415 "data_size": 63488 00:28:56.415 }, 00:28:56.415 { 00:28:56.415 "name": "pt3", 00:28:56.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:56.415 "is_configured": true, 00:28:56.415 "data_offset": 2048, 00:28:56.415 "data_size": 63488 00:28:56.415 }, 00:28:56.415 { 00:28:56.415 "name": "pt4", 00:28:56.415 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:56.415 "is_configured": true, 00:28:56.415 "data_offset": 2048, 00:28:56.415 "data_size": 63488 00:28:56.415 } 00:28:56.415 ] 00:28:56.415 } 00:28:56.415 } 00:28:56.415 }' 00:28:56.415 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:56.415 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:56.415 pt2 00:28:56.415 pt3 00:28:56.415 pt4' 00:28:56.415 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:56.415 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:56.415 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:56.674 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:56.674 "name": "pt1", 00:28:56.674 "aliases": [ 00:28:56.674 "00000000-0000-0000-0000-000000000001" 00:28:56.674 ], 00:28:56.674 "product_name": "passthru", 00:28:56.674 "block_size": 512, 00:28:56.674 "num_blocks": 65536, 00:28:56.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:56.674 "assigned_rate_limits": { 00:28:56.674 "rw_ios_per_sec": 0, 00:28:56.674 "rw_mbytes_per_sec": 0, 00:28:56.674 "r_mbytes_per_sec": 0, 00:28:56.674 "w_mbytes_per_sec": 0 00:28:56.674 }, 00:28:56.674 "claimed": true, 00:28:56.674 "claim_type": "exclusive_write", 00:28:56.674 "zoned": false, 00:28:56.674 "supported_io_types": { 00:28:56.674 "read": true, 00:28:56.674 "write": true, 00:28:56.674 "unmap": true, 00:28:56.674 "flush": true, 00:28:56.674 "reset": true, 00:28:56.674 "nvme_admin": false, 00:28:56.674 "nvme_io": false, 00:28:56.674 "nvme_io_md": false, 00:28:56.674 "write_zeroes": true, 00:28:56.674 "zcopy": true, 00:28:56.674 "get_zone_info": false, 00:28:56.674 "zone_management": false, 00:28:56.674 "zone_append": false, 00:28:56.674 "compare": false, 00:28:56.674 "compare_and_write": false, 00:28:56.674 "abort": true, 00:28:56.674 "seek_hole": false, 00:28:56.674 "seek_data": false, 00:28:56.674 "copy": true, 00:28:56.674 "nvme_iov_md": false 00:28:56.674 }, 00:28:56.674 "memory_domains": [ 00:28:56.674 { 00:28:56.674 "dma_device_id": "system", 00:28:56.674 "dma_device_type": 1 00:28:56.674 }, 00:28:56.674 { 00:28:56.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.674 "dma_device_type": 2 00:28:56.674 } 00:28:56.674 ], 00:28:56.674 "driver_specific": { 00:28:56.674 "passthru": { 00:28:56.674 "name": "pt1", 00:28:56.674 "base_bdev_name": "malloc1" 00:28:56.674 } 00:28:56.674 } 00:28:56.674 }' 00:28:56.674 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.933 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.933 08:55:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:56.933 08:55:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.933 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.933 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:56.933 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:57.192 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:57.451 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:57.451 "name": "pt2", 00:28:57.451 "aliases": [ 00:28:57.451 "00000000-0000-0000-0000-000000000002" 00:28:57.451 ], 00:28:57.451 "product_name": "passthru", 00:28:57.451 "block_size": 512, 00:28:57.451 "num_blocks": 65536, 00:28:57.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:57.451 "assigned_rate_limits": { 00:28:57.451 "rw_ios_per_sec": 0, 00:28:57.451 "rw_mbytes_per_sec": 0, 00:28:57.451 "r_mbytes_per_sec": 0, 00:28:57.451 "w_mbytes_per_sec": 0 00:28:57.451 }, 00:28:57.451 "claimed": true, 00:28:57.451 "claim_type": "exclusive_write", 00:28:57.451 "zoned": false, 00:28:57.451 "supported_io_types": { 00:28:57.451 "read": true, 00:28:57.451 "write": true, 00:28:57.451 "unmap": true, 00:28:57.451 "flush": true, 00:28:57.451 "reset": true, 00:28:57.451 "nvme_admin": false, 00:28:57.451 "nvme_io": false, 00:28:57.451 "nvme_io_md": false, 00:28:57.451 "write_zeroes": true, 00:28:57.451 "zcopy": true, 00:28:57.451 "get_zone_info": false, 00:28:57.451 "zone_management": false, 00:28:57.451 "zone_append": false, 00:28:57.451 "compare": false, 00:28:57.451 "compare_and_write": false, 00:28:57.451 "abort": true, 00:28:57.451 "seek_hole": false, 00:28:57.451 "seek_data": false, 00:28:57.451 "copy": true, 00:28:57.451 "nvme_iov_md": false 00:28:57.451 }, 00:28:57.451 "memory_domains": [ 00:28:57.451 { 00:28:57.451 "dma_device_id": "system", 00:28:57.451 "dma_device_type": 1 00:28:57.451 }, 00:28:57.451 { 00:28:57.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.451 "dma_device_type": 2 00:28:57.451 } 00:28:57.451 ], 00:28:57.451 "driver_specific": { 00:28:57.451 "passthru": { 00:28:57.451 "name": "pt2", 00:28:57.451 "base_bdev_name": "malloc2" 00:28:57.451 } 00:28:57.451 } 00:28:57.451 }' 00:28:57.451 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.712 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.982 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:57.982 08:55:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.982 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.982 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:57.982 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:57.982 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:57.982 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:58.267 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.267 "name": "pt3", 00:28:58.267 "aliases": [ 00:28:58.267 "00000000-0000-0000-0000-000000000003" 00:28:58.267 ], 00:28:58.267 "product_name": "passthru", 00:28:58.267 "block_size": 512, 00:28:58.267 "num_blocks": 65536, 00:28:58.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:58.267 "assigned_rate_limits": { 00:28:58.267 "rw_ios_per_sec": 0, 00:28:58.267 "rw_mbytes_per_sec": 0, 00:28:58.267 "r_mbytes_per_sec": 0, 00:28:58.267 "w_mbytes_per_sec": 0 00:28:58.267 }, 00:28:58.267 "claimed": true, 00:28:58.267 "claim_type": "exclusive_write", 00:28:58.267 "zoned": false, 00:28:58.267 "supported_io_types": { 00:28:58.267 "read": true, 00:28:58.267 "write": true, 00:28:58.267 "unmap": true, 00:28:58.267 "flush": true, 00:28:58.267 "reset": true, 00:28:58.267 "nvme_admin": false, 00:28:58.267 "nvme_io": false, 00:28:58.267 "nvme_io_md": false, 00:28:58.267 "write_zeroes": true, 00:28:58.267 "zcopy": true, 00:28:58.267 "get_zone_info": false, 00:28:58.267 "zone_management": false, 00:28:58.267 "zone_append": false, 00:28:58.267 "compare": false, 00:28:58.267 "compare_and_write": false, 00:28:58.267 "abort": true, 00:28:58.267 "seek_hole": false, 00:28:58.267 "seek_data": false, 00:28:58.267 "copy": true, 00:28:58.267 "nvme_iov_md": false 00:28:58.267 }, 00:28:58.267 "memory_domains": [ 00:28:58.267 { 00:28:58.267 "dma_device_id": "system", 00:28:58.267 "dma_device_type": 1 00:28:58.267 }, 00:28:58.267 { 00:28:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.267 "dma_device_type": 2 00:28:58.267 } 00:28:58.267 ], 00:28:58.267 "driver_specific": { 00:28:58.267 "passthru": { 00:28:58.267 "name": "pt3", 00:28:58.267 "base_bdev_name": "malloc3" 00:28:58.267 } 00:28:58.267 } 00:28:58.267 }' 00:28:58.267 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.267 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.533 
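The block of jq probes around this point repeats once per member (bdev_raid.sh@203-208): each passthru bdev is dumped with bdev_get_bdevs and its block_size, md_size, md_interleave and dif_type are compared against the expected values. Roughly, as a self-contained loop (RPC call and jq filters as in the trace; the [[ ]] comparisons paraphrase the script's checks):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for name in pt1 pt2 pt3 pt4; do
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]   # same 512-byte blocks as the malloc base
    [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done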
08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.533 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.792 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.792 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.792 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.792 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.792 08:55:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:28:59.050 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:59.050 "name": "pt4", 00:28:59.050 "aliases": [ 00:28:59.050 "00000000-0000-0000-0000-000000000004" 00:28:59.050 ], 00:28:59.050 "product_name": "passthru", 00:28:59.050 "block_size": 512, 00:28:59.050 "num_blocks": 65536, 00:28:59.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:59.050 "assigned_rate_limits": { 00:28:59.050 "rw_ios_per_sec": 0, 00:28:59.050 "rw_mbytes_per_sec": 0, 00:28:59.050 "r_mbytes_per_sec": 0, 00:28:59.050 "w_mbytes_per_sec": 0 00:28:59.050 }, 00:28:59.050 "claimed": true, 00:28:59.051 "claim_type": "exclusive_write", 00:28:59.051 "zoned": false, 00:28:59.051 "supported_io_types": { 00:28:59.051 "read": true, 00:28:59.051 "write": true, 00:28:59.051 "unmap": true, 00:28:59.051 "flush": true, 00:28:59.051 "reset": true, 00:28:59.051 "nvme_admin": false, 00:28:59.051 "nvme_io": false, 00:28:59.051 "nvme_io_md": false, 00:28:59.051 "write_zeroes": true, 00:28:59.051 "zcopy": true, 00:28:59.051 "get_zone_info": false, 00:28:59.051 "zone_management": false, 00:28:59.051 "zone_append": false, 00:28:59.051 "compare": false, 00:28:59.051 "compare_and_write": false, 00:28:59.051 "abort": true, 00:28:59.051 "seek_hole": false, 00:28:59.051 "seek_data": false, 00:28:59.051 "copy": true, 00:28:59.051 "nvme_iov_md": false 00:28:59.051 }, 00:28:59.051 "memory_domains": [ 00:28:59.051 { 00:28:59.051 "dma_device_id": "system", 00:28:59.051 "dma_device_type": 1 00:28:59.051 }, 00:28:59.051 { 00:28:59.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.051 "dma_device_type": 2 00:28:59.051 } 00:28:59.051 ], 00:28:59.051 "driver_specific": { 00:28:59.051 "passthru": { 00:28:59.051 "name": "pt4", 00:28:59.051 "base_bdev_name": "malloc4" 00:28:59.051 } 00:28:59.051 } 00:28:59.051 }' 00:28:59.051 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.051 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.051 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:59.051 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:59.309 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.567 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.567 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:59.567 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:59.567 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:28:59.826 [2024-07-12 08:55:34.804335] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:59.826 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5784f098-bb0b-4f63-850c-f22ec139358b 00:28:59.826 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 5784f098-bb0b-4f63-850c-f22ec139358b ']' 00:28:59.826 08:55:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:00.085 [2024-07-12 08:55:35.044079] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.085 [2024-07-12 08:55:35.044415] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:00.085 [2024-07-12 08:55:35.044609] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:00.085 [2024-07-12 08:55:35.044840] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:00.085 [2024-07-12 08:55:35.044942] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:29:00.085 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.085 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:29:00.343 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:29:00.343 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:29:00.343 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.343 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:00.602 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.602 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:00.861 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.861 08:55:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:01.120 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:29:01.120 08:55:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:01.379 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:01.638 [2024-07-12 08:55:36.776678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:01.638 [2024-07-12 08:55:36.779033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:01.638 [2024-07-12 08:55:36.779255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:01.638 [2024-07-12 08:55:36.779342] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:01.638 [2024-07-12 08:55:36.779571] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:01.638 [2024-07-12 08:55:36.779814] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:01.638 [2024-07-12 08:55:36.779984] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:01.638 [2024-07-12 08:55:36.780063] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:29:01.638 [2024-07-12 08:55:36.780161] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:01.638 [2024-07-12 08:55:36.780300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:29:01.638 request: 00:29:01.638 { 00:29:01.638 "name": "raid_bdev1", 00:29:01.638 "raid_level": "raid1", 00:29:01.638 "base_bdevs": [ 00:29:01.638 "malloc1", 00:29:01.638 "malloc2", 00:29:01.638 "malloc3", 00:29:01.638 "malloc4" 00:29:01.638 ], 00:29:01.638 "superblock": false, 00:29:01.638 "method": "bdev_raid_create", 00:29:01.638 "req_id": 1 00:29:01.638 } 00:29:01.638 Got JSON-RPC error response 00:29:01.638 response: 00:29:01.638 { 00:29:01.638 "code": -17, 00:29:01.638 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:01.638 } 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.638 08:55:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:29:01.897 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:29:01.897 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:29:01.897 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:02.156 [2024-07-12 08:55:37.268993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:02.156 [2024-07-12 08:55:37.269443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.156 [2024-07-12 08:55:37.269515] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:02.156 [2024-07-12 08:55:37.269827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.156 [2024-07-12 08:55:37.272393] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.156 [2024-07-12 08:55:37.272566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:02.156 [2024-07-12 08:55:37.272795] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:02.156 [2024-07-12 08:55:37.272972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:02.156 pt1 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.156 08:55:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.156 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.415 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.415 "name": "raid_bdev1", 00:29:02.415 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:02.415 "strip_size_kb": 0, 00:29:02.415 "state": "configuring", 00:29:02.415 "raid_level": "raid1", 00:29:02.415 "superblock": true, 00:29:02.415 "num_base_bdevs": 4, 00:29:02.415 "num_base_bdevs_discovered": 1, 00:29:02.415 "num_base_bdevs_operational": 4, 00:29:02.415 "base_bdevs_list": [ 00:29:02.415 { 00:29:02.415 "name": "pt1", 00:29:02.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:02.415 "is_configured": true, 00:29:02.415 "data_offset": 2048, 00:29:02.415 "data_size": 63488 00:29:02.415 }, 00:29:02.415 { 00:29:02.415 "name": null, 00:29:02.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.415 "is_configured": false, 00:29:02.415 "data_offset": 2048, 00:29:02.415 "data_size": 63488 00:29:02.415 }, 00:29:02.415 { 00:29:02.415 "name": null, 00:29:02.415 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:02.415 "is_configured": false, 00:29:02.415 "data_offset": 2048, 00:29:02.415 "data_size": 63488 00:29:02.415 }, 00:29:02.415 { 00:29:02.415 "name": null, 00:29:02.415 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:02.415 "is_configured": false, 00:29:02.415 "data_offset": 2048, 00:29:02.415 "data_size": 63488 00:29:02.415 } 00:29:02.415 ] 00:29:02.415 }' 00:29:02.415 08:55:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.415 08:55:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.350 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:29:03.350 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:03.350 [2024-07-12 08:55:38.485736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:03.350 [2024-07-12 08:55:38.486252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.350 [2024-07-12 08:55:38.486466] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:03.350 [2024-07-12 08:55:38.486632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.350 [2024-07-12 08:55:38.487368] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.350 [2024-07-12 08:55:38.487538] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:29:03.350 [2024-07-12 08:55:38.487771] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:03.350 [2024-07-12 08:55:38.487912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:03.350 pt2 00:29:03.350 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:03.610 [2024-07-12 08:55:38.717982] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.610 08:55:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.869 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.869 "name": "raid_bdev1", 00:29:03.869 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:03.869 "strip_size_kb": 0, 00:29:03.869 "state": "configuring", 00:29:03.869 "raid_level": "raid1", 00:29:03.869 "superblock": true, 00:29:03.869 "num_base_bdevs": 4, 00:29:03.869 "num_base_bdevs_discovered": 1, 00:29:03.869 "num_base_bdevs_operational": 4, 00:29:03.869 "base_bdevs_list": [ 00:29:03.869 { 00:29:03.869 "name": "pt1", 00:29:03.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:03.869 "is_configured": true, 00:29:03.869 "data_offset": 2048, 00:29:03.869 "data_size": 63488 00:29:03.869 }, 00:29:03.869 { 00:29:03.869 "name": null, 00:29:03.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.869 "is_configured": false, 00:29:03.869 "data_offset": 2048, 00:29:03.869 "data_size": 63488 00:29:03.869 }, 00:29:03.869 { 00:29:03.869 "name": null, 00:29:03.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.869 "is_configured": false, 00:29:03.869 "data_offset": 2048, 00:29:03.869 "data_size": 63488 00:29:03.869 }, 00:29:03.869 { 00:29:03.869 "name": null, 00:29:03.869 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:03.869 "is_configured": false, 00:29:03.869 "data_offset": 2048, 00:29:03.869 "data_size": 63488 00:29:03.869 } 00:29:03.869 ] 00:29:03.869 }' 00:29:03.869 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.869 08:55:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:04.806 [2024-07-12 08:55:39.926205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:04.806 [2024-07-12 08:55:39.926644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.806 [2024-07-12 08:55:39.926842] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:04.806 [2024-07-12 08:55:39.926991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.806 [2024-07-12 08:55:39.927730] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.806 [2024-07-12 08:55:39.927894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:04.806 [2024-07-12 08:55:39.928144] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:04.806 [2024-07-12 08:55:39.928303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:04.806 pt2 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:04.806 08:55:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:05.065 [2024-07-12 08:55:40.142258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:05.065 [2024-07-12 08:55:40.142734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.065 [2024-07-12 08:55:40.142931] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:05.065 [2024-07-12 08:55:40.143083] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.065 [2024-07-12 08:55:40.143850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.065 [2024-07-12 08:55:40.144043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:05.065 [2024-07-12 08:55:40.144277] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:05.065 [2024-07-12 08:55:40.144413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:05.065 pt3 00:29:05.065 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:05.065 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:05.065 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:05.324 [2024-07-12 08:55:40.358269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:05.324 [2024-07-12 08:55:40.358738] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.324 [2024-07-12 08:55:40.358818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:05.324 [2024-07-12 08:55:40.359091] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.324 [2024-07-12 08:55:40.359824] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.324 [2024-07-12 08:55:40.360008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:05.324 [2024-07-12 08:55:40.360228] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:05.324 [2024-07-12 08:55:40.360376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:05.324 [2024-07-12 08:55:40.360679] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:05.324 [2024-07-12 08:55:40.360819] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:05.324 [2024-07-12 08:55:40.360977] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:05.324 [2024-07-12 08:55:40.361510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:05.324 [2024-07-12 08:55:40.361624] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:05.324 [2024-07-12 08:55:40.361862] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.324 pt4 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.324 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.583 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.583 "name": "raid_bdev1", 00:29:05.583 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:05.583 "strip_size_kb": 0, 00:29:05.583 "state": "online", 00:29:05.583 "raid_level": "raid1", 00:29:05.583 "superblock": true, 00:29:05.583 
"num_base_bdevs": 4, 00:29:05.583 "num_base_bdevs_discovered": 4, 00:29:05.583 "num_base_bdevs_operational": 4, 00:29:05.583 "base_bdevs_list": [ 00:29:05.583 { 00:29:05.583 "name": "pt1", 00:29:05.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:05.583 "is_configured": true, 00:29:05.583 "data_offset": 2048, 00:29:05.583 "data_size": 63488 00:29:05.583 }, 00:29:05.583 { 00:29:05.583 "name": "pt2", 00:29:05.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:05.583 "is_configured": true, 00:29:05.583 "data_offset": 2048, 00:29:05.583 "data_size": 63488 00:29:05.583 }, 00:29:05.583 { 00:29:05.583 "name": "pt3", 00:29:05.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:05.583 "is_configured": true, 00:29:05.583 "data_offset": 2048, 00:29:05.583 "data_size": 63488 00:29:05.583 }, 00:29:05.583 { 00:29:05.583 "name": "pt4", 00:29:05.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:05.583 "is_configured": true, 00:29:05.583 "data_offset": 2048, 00:29:05.583 "data_size": 63488 00:29:05.583 } 00:29:05.583 ] 00:29:05.583 }' 00:29:05.583 08:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.583 08:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:06.150 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:06.408 [2024-07-12 08:55:41.550940] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:06.408 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:06.408 "name": "raid_bdev1", 00:29:06.408 "aliases": [ 00:29:06.408 "5784f098-bb0b-4f63-850c-f22ec139358b" 00:29:06.408 ], 00:29:06.408 "product_name": "Raid Volume", 00:29:06.408 "block_size": 512, 00:29:06.408 "num_blocks": 63488, 00:29:06.408 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:06.408 "assigned_rate_limits": { 00:29:06.408 "rw_ios_per_sec": 0, 00:29:06.408 "rw_mbytes_per_sec": 0, 00:29:06.408 "r_mbytes_per_sec": 0, 00:29:06.408 "w_mbytes_per_sec": 0 00:29:06.408 }, 00:29:06.408 "claimed": false, 00:29:06.408 "zoned": false, 00:29:06.408 "supported_io_types": { 00:29:06.408 "read": true, 00:29:06.408 "write": true, 00:29:06.408 "unmap": false, 00:29:06.408 "flush": false, 00:29:06.408 "reset": true, 00:29:06.408 "nvme_admin": false, 00:29:06.408 "nvme_io": false, 00:29:06.408 "nvme_io_md": false, 00:29:06.408 "write_zeroes": true, 00:29:06.408 "zcopy": false, 00:29:06.408 "get_zone_info": false, 00:29:06.408 "zone_management": false, 00:29:06.408 "zone_append": false, 00:29:06.408 "compare": false, 00:29:06.408 "compare_and_write": false, 00:29:06.408 "abort": false, 00:29:06.408 "seek_hole": false, 
00:29:06.408 "seek_data": false, 00:29:06.408 "copy": false, 00:29:06.408 "nvme_iov_md": false 00:29:06.408 }, 00:29:06.408 "memory_domains": [ 00:29:06.408 { 00:29:06.408 "dma_device_id": "system", 00:29:06.408 "dma_device_type": 1 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.408 "dma_device_type": 2 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "system", 00:29:06.408 "dma_device_type": 1 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.408 "dma_device_type": 2 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "system", 00:29:06.408 "dma_device_type": 1 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.408 "dma_device_type": 2 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "system", 00:29:06.408 "dma_device_type": 1 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.408 "dma_device_type": 2 00:29:06.408 } 00:29:06.408 ], 00:29:06.408 "driver_specific": { 00:29:06.408 "raid": { 00:29:06.408 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:06.408 "strip_size_kb": 0, 00:29:06.408 "state": "online", 00:29:06.408 "raid_level": "raid1", 00:29:06.408 "superblock": true, 00:29:06.408 "num_base_bdevs": 4, 00:29:06.408 "num_base_bdevs_discovered": 4, 00:29:06.408 "num_base_bdevs_operational": 4, 00:29:06.408 "base_bdevs_list": [ 00:29:06.408 { 00:29:06.408 "name": "pt1", 00:29:06.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:06.408 "is_configured": true, 00:29:06.408 "data_offset": 2048, 00:29:06.408 "data_size": 63488 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "name": "pt2", 00:29:06.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:06.408 "is_configured": true, 00:29:06.408 "data_offset": 2048, 00:29:06.408 "data_size": 63488 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "name": "pt3", 00:29:06.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:06.408 "is_configured": true, 00:29:06.408 "data_offset": 2048, 00:29:06.408 "data_size": 63488 00:29:06.408 }, 00:29:06.408 { 00:29:06.408 "name": "pt4", 00:29:06.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:06.408 "is_configured": true, 00:29:06.408 "data_offset": 2048, 00:29:06.408 "data_size": 63488 00:29:06.408 } 00:29:06.408 ] 00:29:06.408 } 00:29:06.408 } 00:29:06.408 }' 00:29:06.408 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:06.666 pt2 00:29:06.666 pt3 00:29:06.666 pt4' 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:06.666 "name": "pt1", 00:29:06.666 "aliases": [ 00:29:06.666 "00000000-0000-0000-0000-000000000001" 00:29:06.666 ], 00:29:06.666 "product_name": "passthru", 00:29:06.666 "block_size": 512, 00:29:06.666 "num_blocks": 65536, 00:29:06.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:06.666 "assigned_rate_limits": { 
00:29:06.666 "rw_ios_per_sec": 0, 00:29:06.666 "rw_mbytes_per_sec": 0, 00:29:06.666 "r_mbytes_per_sec": 0, 00:29:06.666 "w_mbytes_per_sec": 0 00:29:06.666 }, 00:29:06.666 "claimed": true, 00:29:06.666 "claim_type": "exclusive_write", 00:29:06.666 "zoned": false, 00:29:06.666 "supported_io_types": { 00:29:06.666 "read": true, 00:29:06.666 "write": true, 00:29:06.666 "unmap": true, 00:29:06.666 "flush": true, 00:29:06.666 "reset": true, 00:29:06.666 "nvme_admin": false, 00:29:06.666 "nvme_io": false, 00:29:06.666 "nvme_io_md": false, 00:29:06.666 "write_zeroes": true, 00:29:06.666 "zcopy": true, 00:29:06.666 "get_zone_info": false, 00:29:06.666 "zone_management": false, 00:29:06.666 "zone_append": false, 00:29:06.666 "compare": false, 00:29:06.666 "compare_and_write": false, 00:29:06.666 "abort": true, 00:29:06.666 "seek_hole": false, 00:29:06.666 "seek_data": false, 00:29:06.666 "copy": true, 00:29:06.666 "nvme_iov_md": false 00:29:06.666 }, 00:29:06.666 "memory_domains": [ 00:29:06.666 { 00:29:06.666 "dma_device_id": "system", 00:29:06.666 "dma_device_type": 1 00:29:06.666 }, 00:29:06.666 { 00:29:06.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:06.666 "dma_device_type": 2 00:29:06.666 } 00:29:06.666 ], 00:29:06.666 "driver_specific": { 00:29:06.666 "passthru": { 00:29:06.666 "name": "pt1", 00:29:06.666 "base_bdev_name": "malloc1" 00:29:06.666 } 00:29:06.666 } 00:29:06.666 }' 00:29:06.666 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:06.925 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:06.925 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:06.925 08:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:06.925 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:06.925 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:06.925 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:07.184 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.443 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.443 "name": "pt2", 00:29:07.443 "aliases": [ 00:29:07.443 "00000000-0000-0000-0000-000000000002" 00:29:07.443 ], 00:29:07.443 "product_name": "passthru", 00:29:07.443 "block_size": 512, 00:29:07.443 "num_blocks": 65536, 00:29:07.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:07.443 "assigned_rate_limits": { 00:29:07.443 "rw_ios_per_sec": 0, 00:29:07.443 "rw_mbytes_per_sec": 0, 00:29:07.443 "r_mbytes_per_sec": 0, 00:29:07.443 "w_mbytes_per_sec": 0 00:29:07.443 
}, 00:29:07.443 "claimed": true, 00:29:07.443 "claim_type": "exclusive_write", 00:29:07.443 "zoned": false, 00:29:07.443 "supported_io_types": { 00:29:07.443 "read": true, 00:29:07.443 "write": true, 00:29:07.443 "unmap": true, 00:29:07.443 "flush": true, 00:29:07.443 "reset": true, 00:29:07.443 "nvme_admin": false, 00:29:07.443 "nvme_io": false, 00:29:07.443 "nvme_io_md": false, 00:29:07.443 "write_zeroes": true, 00:29:07.443 "zcopy": true, 00:29:07.443 "get_zone_info": false, 00:29:07.443 "zone_management": false, 00:29:07.443 "zone_append": false, 00:29:07.443 "compare": false, 00:29:07.443 "compare_and_write": false, 00:29:07.443 "abort": true, 00:29:07.443 "seek_hole": false, 00:29:07.443 "seek_data": false, 00:29:07.443 "copy": true, 00:29:07.443 "nvme_iov_md": false 00:29:07.443 }, 00:29:07.443 "memory_domains": [ 00:29:07.443 { 00:29:07.443 "dma_device_id": "system", 00:29:07.443 "dma_device_type": 1 00:29:07.443 }, 00:29:07.443 { 00:29:07.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.443 "dma_device_type": 2 00:29:07.443 } 00:29:07.443 ], 00:29:07.443 "driver_specific": { 00:29:07.443 "passthru": { 00:29:07.443 "name": "pt2", 00:29:07.443 "base_bdev_name": "malloc2" 00:29:07.443 } 00:29:07.443 } 00:29:07.443 }' 00:29:07.443 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.701 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.960 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:07.960 08:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.960 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.960 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:07.960 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.960 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:07.960 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.219 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.219 "name": "pt3", 00:29:08.219 "aliases": [ 00:29:08.219 "00000000-0000-0000-0000-000000000003" 00:29:08.219 ], 00:29:08.219 "product_name": "passthru", 00:29:08.219 "block_size": 512, 00:29:08.219 "num_blocks": 65536, 00:29:08.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:08.219 "assigned_rate_limits": { 00:29:08.219 "rw_ios_per_sec": 0, 00:29:08.219 "rw_mbytes_per_sec": 0, 00:29:08.219 "r_mbytes_per_sec": 0, 00:29:08.219 "w_mbytes_per_sec": 0 00:29:08.219 }, 00:29:08.219 "claimed": true, 00:29:08.219 "claim_type": "exclusive_write", 00:29:08.219 "zoned": false, 00:29:08.219 "supported_io_types": { 
00:29:08.219 "read": true, 00:29:08.219 "write": true, 00:29:08.219 "unmap": true, 00:29:08.219 "flush": true, 00:29:08.219 "reset": true, 00:29:08.219 "nvme_admin": false, 00:29:08.219 "nvme_io": false, 00:29:08.219 "nvme_io_md": false, 00:29:08.219 "write_zeroes": true, 00:29:08.219 "zcopy": true, 00:29:08.219 "get_zone_info": false, 00:29:08.219 "zone_management": false, 00:29:08.219 "zone_append": false, 00:29:08.219 "compare": false, 00:29:08.219 "compare_and_write": false, 00:29:08.219 "abort": true, 00:29:08.219 "seek_hole": false, 00:29:08.219 "seek_data": false, 00:29:08.219 "copy": true, 00:29:08.219 "nvme_iov_md": false 00:29:08.219 }, 00:29:08.219 "memory_domains": [ 00:29:08.219 { 00:29:08.219 "dma_device_id": "system", 00:29:08.219 "dma_device_type": 1 00:29:08.219 }, 00:29:08.219 { 00:29:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.219 "dma_device_type": 2 00:29:08.219 } 00:29:08.219 ], 00:29:08.219 "driver_specific": { 00:29:08.219 "passthru": { 00:29:08.219 "name": "pt3", 00:29:08.219 "base_bdev_name": "malloc3" 00:29:08.219 } 00:29:08.219 } 00:29:08.219 }' 00:29:08.219 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.219 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.478 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.736 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.736 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.736 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:08.736 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:08.736 08:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.994 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.994 "name": "pt4", 00:29:08.994 "aliases": [ 00:29:08.994 "00000000-0000-0000-0000-000000000004" 00:29:08.994 ], 00:29:08.994 "product_name": "passthru", 00:29:08.994 "block_size": 512, 00:29:08.994 "num_blocks": 65536, 00:29:08.994 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:08.994 "assigned_rate_limits": { 00:29:08.994 "rw_ios_per_sec": 0, 00:29:08.994 "rw_mbytes_per_sec": 0, 00:29:08.994 "r_mbytes_per_sec": 0, 00:29:08.994 "w_mbytes_per_sec": 0 00:29:08.994 }, 00:29:08.994 "claimed": true, 00:29:08.994 "claim_type": "exclusive_write", 00:29:08.994 "zoned": false, 00:29:08.994 "supported_io_types": { 00:29:08.994 "read": true, 00:29:08.994 "write": true, 00:29:08.994 "unmap": true, 00:29:08.994 "flush": true, 00:29:08.994 "reset": true, 00:29:08.994 
"nvme_admin": false, 00:29:08.994 "nvme_io": false, 00:29:08.994 "nvme_io_md": false, 00:29:08.994 "write_zeroes": true, 00:29:08.994 "zcopy": true, 00:29:08.994 "get_zone_info": false, 00:29:08.994 "zone_management": false, 00:29:08.994 "zone_append": false, 00:29:08.994 "compare": false, 00:29:08.994 "compare_and_write": false, 00:29:08.994 "abort": true, 00:29:08.994 "seek_hole": false, 00:29:08.994 "seek_data": false, 00:29:08.994 "copy": true, 00:29:08.994 "nvme_iov_md": false 00:29:08.994 }, 00:29:08.994 "memory_domains": [ 00:29:08.994 { 00:29:08.994 "dma_device_id": "system", 00:29:08.994 "dma_device_type": 1 00:29:08.994 }, 00:29:08.994 { 00:29:08.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.994 "dma_device_type": 2 00:29:08.994 } 00:29:08.994 ], 00:29:08.994 "driver_specific": { 00:29:08.994 "passthru": { 00:29:08.994 "name": "pt4", 00:29:08.994 "base_bdev_name": "malloc4" 00:29:08.994 } 00:29:08.994 } 00:29:08.994 }' 00:29:08.994 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.994 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:09.252 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:09.510 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:09.510 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:09.510 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:09.510 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:29:09.767 [2024-07-12 08:55:44.779621] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:09.767 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 5784f098-bb0b-4f63-850c-f22ec139358b '!=' 5784f098-bb0b-4f63-850c-f22ec139358b ']' 00:29:09.767 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:29:09.767 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:09.767 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:09.768 08:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:10.026 [2024-07-12 08:55:45.071401] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:10.026 
08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.026 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.284 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:10.284 "name": "raid_bdev1", 00:29:10.284 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:10.284 "strip_size_kb": 0, 00:29:10.284 "state": "online", 00:29:10.284 "raid_level": "raid1", 00:29:10.284 "superblock": true, 00:29:10.284 "num_base_bdevs": 4, 00:29:10.284 "num_base_bdevs_discovered": 3, 00:29:10.284 "num_base_bdevs_operational": 3, 00:29:10.284 "base_bdevs_list": [ 00:29:10.284 { 00:29:10.284 "name": null, 00:29:10.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.284 "is_configured": false, 00:29:10.284 "data_offset": 2048, 00:29:10.284 "data_size": 63488 00:29:10.284 }, 00:29:10.284 { 00:29:10.284 "name": "pt2", 00:29:10.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:10.284 "is_configured": true, 00:29:10.284 "data_offset": 2048, 00:29:10.284 "data_size": 63488 00:29:10.284 }, 00:29:10.284 { 00:29:10.284 "name": "pt3", 00:29:10.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:10.284 "is_configured": true, 00:29:10.284 "data_offset": 2048, 00:29:10.284 "data_size": 63488 00:29:10.284 }, 00:29:10.284 { 00:29:10.284 "name": "pt4", 00:29:10.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:10.284 "is_configured": true, 00:29:10.284 "data_offset": 2048, 00:29:10.284 "data_size": 63488 00:29:10.284 } 00:29:10.284 ] 00:29:10.284 }' 00:29:10.284 08:55:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:10.284 08:55:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.219 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:11.219 [2024-07-12 08:55:46.271626] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:11.219 [2024-07-12 08:55:46.271687] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:11.219 [2024-07-12 08:55:46.271811] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:11.219 [2024-07-12 08:55:46.271912] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:11.219 [2024-07-12 08:55:46.272154] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:11.219 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.219 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:29:11.478 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:29:11.478 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:29:11.478 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:29:11.478 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:11.478 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:11.736 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:11.736 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:11.736 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:11.995 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:11.995 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:11.995 08:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:12.253 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:29:12.253 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:29:12.253 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:29:12.253 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:12.253 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:12.512 [2024-07-12 08:55:47.456043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:12.513 [2024-07-12 08:55:47.456224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.513 [2024-07-12 08:55:47.456266] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:29:12.513 [2024-07-12 08:55:47.456330] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.513 [2024-07-12 08:55:47.459543] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.513 [2024-07-12 08:55:47.459614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:12.513 [2024-07-12 08:55:47.459766] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:12.513 [2024-07-12 08:55:47.460128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:12.513 pt2 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.513 "name": "raid_bdev1", 00:29:12.513 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:12.513 "strip_size_kb": 0, 00:29:12.513 "state": "configuring", 00:29:12.513 "raid_level": "raid1", 00:29:12.513 "superblock": true, 00:29:12.513 "num_base_bdevs": 4, 00:29:12.513 "num_base_bdevs_discovered": 1, 00:29:12.513 "num_base_bdevs_operational": 3, 00:29:12.513 "base_bdevs_list": [ 00:29:12.513 { 00:29:12.513 "name": null, 00:29:12.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.513 "is_configured": false, 00:29:12.513 "data_offset": 2048, 00:29:12.513 "data_size": 63488 00:29:12.513 }, 00:29:12.513 { 00:29:12.513 "name": "pt2", 00:29:12.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.513 "is_configured": true, 00:29:12.513 "data_offset": 2048, 00:29:12.513 "data_size": 63488 00:29:12.513 }, 00:29:12.513 { 00:29:12.513 "name": null, 00:29:12.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:12.513 "is_configured": false, 00:29:12.513 "data_offset": 2048, 00:29:12.513 "data_size": 63488 00:29:12.513 }, 00:29:12.513 { 00:29:12.513 "name": null, 00:29:12.513 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:12.513 "is_configured": false, 00:29:12.513 "data_offset": 2048, 00:29:12.513 "data_size": 63488 00:29:12.513 } 00:29:12.513 ] 00:29:12.513 }' 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.513 08:55:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.447 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:13.448 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:13.448 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:13.706 [2024-07-12 08:55:48.697498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:13.706 [2024-07-12 08:55:48.697646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.706 [2024-07-12 08:55:48.697725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c380 00:29:13.706 [2024-07-12 08:55:48.697773] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.706 [2024-07-12 08:55:48.698761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.706 [2024-07-12 08:55:48.698804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:13.706 [2024-07-12 08:55:48.698934] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:13.706 [2024-07-12 08:55:48.698968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:13.706 pt3 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.706 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.964 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:13.964 "name": "raid_bdev1", 00:29:13.964 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:13.964 "strip_size_kb": 0, 00:29:13.964 "state": "configuring", 00:29:13.964 "raid_level": "raid1", 00:29:13.964 "superblock": true, 00:29:13.964 "num_base_bdevs": 4, 00:29:13.964 "num_base_bdevs_discovered": 2, 00:29:13.964 "num_base_bdevs_operational": 3, 00:29:13.964 "base_bdevs_list": [ 00:29:13.964 { 00:29:13.964 "name": null, 00:29:13.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.964 "is_configured": false, 00:29:13.964 "data_offset": 2048, 00:29:13.964 "data_size": 63488 00:29:13.964 }, 00:29:13.964 { 00:29:13.964 "name": "pt2", 00:29:13.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:13.964 "is_configured": true, 00:29:13.964 "data_offset": 2048, 00:29:13.964 "data_size": 63488 00:29:13.964 }, 00:29:13.964 { 00:29:13.964 "name": "pt3", 00:29:13.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:13.964 "is_configured": true, 00:29:13.964 "data_offset": 2048, 00:29:13.964 "data_size": 63488 00:29:13.964 }, 00:29:13.964 { 00:29:13.964 "name": null, 00:29:13.964 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:13.964 "is_configured": false, 00:29:13.964 "data_offset": 2048, 00:29:13.964 "data_size": 63488 00:29:13.964 } 00:29:13.964 ] 00:29:13.964 }' 00:29:13.964 08:55:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:29:13.964 08:55:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.532 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:29:14.532 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:29:14.532 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:29:14.532 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:14.791 [2024-07-12 08:55:49.886393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:14.791 [2024-07-12 08:55:49.886577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.791 [2024-07-12 08:55:49.886628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:14.791 [2024-07-12 08:55:49.886655] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.791 [2024-07-12 08:55:49.887652] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.791 [2024-07-12 08:55:49.887715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:14.791 [2024-07-12 08:55:49.887841] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:14.791 [2024-07-12 08:55:49.887997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:14.791 [2024-07-12 08:55:49.888467] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:29:14.791 [2024-07-12 08:55:49.888493] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:14.791 [2024-07-12 08:55:49.888654] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:29:14.791 [2024-07-12 08:55:49.889305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:29:14.791 [2024-07-12 08:55:49.889330] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:29:14.791 [2024-07-12 08:55:49.889648] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.791 pt4 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.791 08:55:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.791 08:55:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.049 08:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.049 "name": "raid_bdev1", 00:29:15.049 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:15.049 "strip_size_kb": 0, 00:29:15.049 "state": "online", 00:29:15.049 "raid_level": "raid1", 00:29:15.049 "superblock": true, 00:29:15.049 "num_base_bdevs": 4, 00:29:15.049 "num_base_bdevs_discovered": 3, 00:29:15.049 "num_base_bdevs_operational": 3, 00:29:15.049 "base_bdevs_list": [ 00:29:15.049 { 00:29:15.049 "name": null, 00:29:15.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.049 "is_configured": false, 00:29:15.049 "data_offset": 2048, 00:29:15.049 "data_size": 63488 00:29:15.049 }, 00:29:15.049 { 00:29:15.049 "name": "pt2", 00:29:15.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:15.049 "is_configured": true, 00:29:15.049 "data_offset": 2048, 00:29:15.049 "data_size": 63488 00:29:15.049 }, 00:29:15.049 { 00:29:15.049 "name": "pt3", 00:29:15.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:15.049 "is_configured": true, 00:29:15.049 "data_offset": 2048, 00:29:15.049 "data_size": 63488 00:29:15.049 }, 00:29:15.049 { 00:29:15.049 "name": "pt4", 00:29:15.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:15.049 "is_configured": true, 00:29:15.049 "data_offset": 2048, 00:29:15.049 "data_size": 63488 00:29:15.049 } 00:29:15.049 ] 00:29:15.049 }' 00:29:15.049 08:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.049 08:55:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.616 08:55:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:15.874 [2024-07-12 08:55:51.054563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:15.874 [2024-07-12 08:55:51.054619] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:15.874 [2024-07-12 08:55:51.054733] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:15.874 [2024-07-12 08:55:51.054846] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:15.874 [2024-07-12 08:55:51.054860] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:29:16.132 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:29:16.391 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:16.649 [2024-07-12 08:55:51.826666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:16.649 [2024-07-12 08:55:51.826785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.649 [2024-07-12 08:55:51.826831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:29:16.649 [2024-07-12 08:55:51.826881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.649 [2024-07-12 08:55:51.829601] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.649 [2024-07-12 08:55:51.829666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:16.649 [2024-07-12 08:55:51.829814] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:16.649 [2024-07-12 08:55:51.829890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:16.649 [2024-07-12 08:55:51.830043] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:16.649 [2024-07-12 08:55:51.830070] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:16.649 [2024-07-12 08:55:51.830103] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:29:16.649 [2024-07-12 08:55:51.830166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:16.649 [2024-07-12 08:55:51.830310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:16.649 pt1 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.907 08:55:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.165 08:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:29:17.165 "name": "raid_bdev1", 00:29:17.165 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:17.165 "strip_size_kb": 0, 00:29:17.165 "state": "configuring", 00:29:17.165 "raid_level": "raid1", 00:29:17.165 "superblock": true, 00:29:17.165 "num_base_bdevs": 4, 00:29:17.165 "num_base_bdevs_discovered": 2, 00:29:17.165 "num_base_bdevs_operational": 3, 00:29:17.165 "base_bdevs_list": [ 00:29:17.165 { 00:29:17.165 "name": null, 00:29:17.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.165 "is_configured": false, 00:29:17.165 "data_offset": 2048, 00:29:17.165 "data_size": 63488 00:29:17.165 }, 00:29:17.165 { 00:29:17.165 "name": "pt2", 00:29:17.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:17.165 "is_configured": true, 00:29:17.165 "data_offset": 2048, 00:29:17.165 "data_size": 63488 00:29:17.165 }, 00:29:17.165 { 00:29:17.165 "name": "pt3", 00:29:17.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:17.165 "is_configured": true, 00:29:17.165 "data_offset": 2048, 00:29:17.165 "data_size": 63488 00:29:17.165 }, 00:29:17.165 { 00:29:17.165 "name": null, 00:29:17.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:17.165 "is_configured": false, 00:29:17.165 "data_offset": 2048, 00:29:17.165 "data_size": 63488 00:29:17.165 } 00:29:17.165 ] 00:29:17.165 }' 00:29:17.165 08:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.165 08:55:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.732 08:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:17.732 08:55:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:17.999 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:29:18.000 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:18.263 [2024-07-12 08:55:53.319098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:18.263 [2024-07-12 08:55:53.319474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:18.263 [2024-07-12 08:55:53.319549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:29:18.263 [2024-07-12 08:55:53.319796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:18.263 [2024-07-12 08:55:53.320519] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:18.263 [2024-07-12 08:55:53.320714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:18.263 [2024-07-12 08:55:53.320952] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:18.263 [2024-07-12 08:55:53.321094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:18.263 [2024-07-12 08:55:53.321344] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:29:18.263 [2024-07-12 08:55:53.321499] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:18.263 [2024-07-12 08:55:53.321652] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:29:18.263 [2024-07-12 08:55:53.322173] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:29:18.263 [2024-07-12 08:55:53.322337] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:29:18.263 [2024-07-12 08:55:53.322576] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.263 pt4 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.263 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.521 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.521 "name": "raid_bdev1", 00:29:18.521 "uuid": "5784f098-bb0b-4f63-850c-f22ec139358b", 00:29:18.521 "strip_size_kb": 0, 00:29:18.521 "state": "online", 00:29:18.521 "raid_level": "raid1", 00:29:18.521 "superblock": true, 00:29:18.521 "num_base_bdevs": 4, 00:29:18.521 "num_base_bdevs_discovered": 3, 00:29:18.521 "num_base_bdevs_operational": 3, 00:29:18.521 "base_bdevs_list": [ 00:29:18.521 { 00:29:18.521 "name": null, 00:29:18.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.521 "is_configured": false, 00:29:18.521 "data_offset": 2048, 00:29:18.521 "data_size": 63488 00:29:18.521 }, 00:29:18.521 { 00:29:18.521 "name": "pt2", 00:29:18.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.521 "is_configured": true, 00:29:18.521 "data_offset": 2048, 00:29:18.521 "data_size": 63488 00:29:18.521 }, 00:29:18.521 { 00:29:18.521 "name": "pt3", 00:29:18.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:18.521 "is_configured": true, 00:29:18.521 "data_offset": 2048, 00:29:18.521 "data_size": 63488 00:29:18.521 }, 00:29:18.521 { 00:29:18.521 "name": "pt4", 00:29:18.521 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:18.521 "is_configured": true, 00:29:18.521 "data_offset": 2048, 00:29:18.521 "data_size": 63488 00:29:18.521 } 00:29:18.521 ] 00:29:18.521 }' 00:29:18.521 08:55:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.521 08:55:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.088 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:29:19.088 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:29:19.655 [2024-07-12 08:55:54.752870] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 5784f098-bb0b-4f63-850c-f22ec139358b '!=' 5784f098-bb0b-4f63-850c-f22ec139358b ']' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 144822 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 144822 ']' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 144822 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144822 00:29:19.655 killing process with pid 144822 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144822' 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 144822 00:29:19.655 08:55:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 144822 00:29:19.655 [2024-07-12 08:55:54.790316] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:19.655 [2024-07-12 08:55:54.790453] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.655 [2024-07-12 08:55:54.790595] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:19.655 [2024-07-12 08:55:54.790738] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:29:20.220 [2024-07-12 08:55:55.109379] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:21.162 ************************************ 00:29:21.162 END TEST raid_superblock_test 00:29:21.162 ************************************ 00:29:21.162 08:55:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:29:21.162 00:29:21.162 real 0m29.180s 00:29:21.162 user 0m54.713s 00:29:21.162 sys 0m3.246s 00:29:21.162 08:55:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.162 08:55:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 08:55:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:21.162 08:55:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:29:21.162 08:55:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 
']' 00:29:21.162 08:55:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.162 08:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 ************************************ 00:29:21.162 START TEST raid_read_error_test 00:29:21.162 ************************************ 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TXCegNd0UU 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=145750 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 145750 /var/tmp/spdk-raid.sock 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 145750 ']' 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:21.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:21.162 08:55:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.162 [2024-07-12 08:55:56.308422] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:29:21.162 [2024-07-12 08:55:56.308886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145750 ] 00:29:21.443 [2024-07-12 08:55:56.483080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.716 [2024-07-12 08:55:56.723112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.716 [2024-07-12 08:55:56.909199] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.283 08:55:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.283 08:55:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:29:22.283 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:22.283 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:22.542 BaseBdev1_malloc 00:29:22.542 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:22.542 true 00:29:22.800 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:22.801 [2024-07-12 08:55:57.945810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:22.801 [2024-07-12 08:55:57.946183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.801 [2024-07-12 08:55:57.946263] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:22.801 [2024-07-12 08:55:57.946571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.801 [2024-07-12 08:55:57.949152] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.801 [2024-07-12 08:55:57.949328] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:29:22.801 BaseBdev1 00:29:22.801 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:22.801 08:55:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:23.059 BaseBdev2_malloc 00:29:23.318 08:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:23.318 true 00:29:23.318 08:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:23.577 [2024-07-12 08:55:58.701818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:23.577 [2024-07-12 08:55:58.702201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.577 [2024-07-12 08:55:58.702283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:23.577 [2024-07-12 08:55:58.702408] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.577 [2024-07-12 08:55:58.704955] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.577 [2024-07-12 08:55:58.705141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:23.577 BaseBdev2 00:29:23.577 08:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:23.577 08:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:23.836 BaseBdev3_malloc 00:29:23.836 08:55:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:24.095 true 00:29:24.095 08:55:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:24.354 [2024-07-12 08:55:59.435561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:24.354 [2024-07-12 08:55:59.435960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.354 [2024-07-12 08:55:59.436037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:24.354 [2024-07-12 08:55:59.436156] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.354 [2024-07-12 08:55:59.438763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.354 [2024-07-12 08:55:59.438954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:24.354 BaseBdev3 00:29:24.354 08:55:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:24.354 08:55:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:24.612 BaseBdev4_malloc 00:29:24.612 08:55:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:24.871 true 00:29:24.871 08:55:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:25.130 [2024-07-12 08:56:00.127358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:25.130 [2024-07-12 08:56:00.127737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.130 [2024-07-12 08:56:00.127829] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:25.130 [2024-07-12 08:56:00.127951] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.130 [2024-07-12 08:56:00.130437] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.130 [2024-07-12 08:56:00.130627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:25.130 BaseBdev4 00:29:25.130 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:29:25.390 [2024-07-12 08:56:00.379644] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:25.390 [2024-07-12 08:56:00.381995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:25.390 [2024-07-12 08:56:00.382241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:25.390 [2024-07-12 08:56:00.382460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:25.390 [2024-07-12 08:56:00.382871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:29:25.390 [2024-07-12 08:56:00.383011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:25.390 [2024-07-12 08:56:00.383211] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:25.390 [2024-07-12 08:56:00.383764] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:29:25.390 [2024-07-12 08:56:00.383896] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:29:25.390 [2024-07-12 08:56:00.384207] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.390 08:56:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.390 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.649 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:25.649 "name": "raid_bdev1", 00:29:25.649 "uuid": "e87413fb-893c-49ed-9d15-3dc86f9a3174", 00:29:25.649 "strip_size_kb": 0, 00:29:25.649 "state": "online", 00:29:25.649 "raid_level": "raid1", 00:29:25.649 "superblock": true, 00:29:25.649 "num_base_bdevs": 4, 00:29:25.649 "num_base_bdevs_discovered": 4, 00:29:25.649 "num_base_bdevs_operational": 4, 00:29:25.649 "base_bdevs_list": [ 00:29:25.649 { 00:29:25.649 "name": "BaseBdev1", 00:29:25.649 "uuid": "178938cd-d526-55d4-a957-01e10af1ce6b", 00:29:25.649 "is_configured": true, 00:29:25.649 "data_offset": 2048, 00:29:25.649 "data_size": 63488 00:29:25.649 }, 00:29:25.649 { 00:29:25.649 "name": "BaseBdev2", 00:29:25.649 "uuid": "24f31dc8-07ca-5c62-8f20-a0f3e2b026b8", 00:29:25.649 "is_configured": true, 00:29:25.649 "data_offset": 2048, 00:29:25.649 "data_size": 63488 00:29:25.649 }, 00:29:25.649 { 00:29:25.649 "name": "BaseBdev3", 00:29:25.649 "uuid": "27eb142d-282b-54d2-856f-ca492eddd1cd", 00:29:25.649 "is_configured": true, 00:29:25.649 "data_offset": 2048, 00:29:25.649 "data_size": 63488 00:29:25.649 }, 00:29:25.649 { 00:29:25.649 "name": "BaseBdev4", 00:29:25.649 "uuid": "e3d4f936-72ab-5825-9dec-d50c2152e6de", 00:29:25.649 "is_configured": true, 00:29:25.649 "data_offset": 2048, 00:29:25.649 "data_size": 63488 00:29:25.649 } 00:29:25.649 ] 00:29:25.649 }' 00:29:25.649 08:56:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:25.649 08:56:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.218 08:56:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:29:26.218 08:56:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:26.492 [2024-07-12 08:56:01.461671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:27.437 08:56:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.437 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.005 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.005 "name": "raid_bdev1", 00:29:28.005 "uuid": "e87413fb-893c-49ed-9d15-3dc86f9a3174", 00:29:28.005 "strip_size_kb": 0, 00:29:28.005 "state": "online", 00:29:28.005 "raid_level": "raid1", 00:29:28.005 "superblock": true, 00:29:28.005 "num_base_bdevs": 4, 00:29:28.005 "num_base_bdevs_discovered": 4, 00:29:28.005 "num_base_bdevs_operational": 4, 00:29:28.005 "base_bdevs_list": [ 00:29:28.005 { 00:29:28.005 "name": "BaseBdev1", 00:29:28.005 "uuid": "178938cd-d526-55d4-a957-01e10af1ce6b", 00:29:28.005 "is_configured": true, 00:29:28.005 "data_offset": 2048, 00:29:28.005 "data_size": 63488 00:29:28.005 }, 00:29:28.005 { 00:29:28.005 "name": "BaseBdev2", 00:29:28.005 "uuid": "24f31dc8-07ca-5c62-8f20-a0f3e2b026b8", 00:29:28.005 "is_configured": true, 00:29:28.005 "data_offset": 2048, 00:29:28.005 "data_size": 63488 00:29:28.005 }, 00:29:28.005 { 00:29:28.005 "name": "BaseBdev3", 00:29:28.005 "uuid": "27eb142d-282b-54d2-856f-ca492eddd1cd", 00:29:28.005 "is_configured": true, 00:29:28.005 "data_offset": 2048, 00:29:28.005 "data_size": 63488 00:29:28.005 }, 00:29:28.005 { 00:29:28.005 "name": "BaseBdev4", 00:29:28.005 "uuid": "e3d4f936-72ab-5825-9dec-d50c2152e6de", 00:29:28.005 "is_configured": true, 00:29:28.005 "data_offset": 2048, 00:29:28.005 "data_size": 63488 00:29:28.005 } 00:29:28.005 ] 00:29:28.005 }' 00:29:28.005 08:56:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.005 08:56:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.572 08:56:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:28.830 [2024-07-12 08:56:03.844722] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:28.830 [2024-07-12 08:56:03.845050] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:28.830 [2024-07-12 08:56:03.847848] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:28.830 [2024-07-12 08:56:03.848029] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.830 [2024-07-12 08:56:03.848193] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:28.831 [2024-07-12 08:56:03.848371] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:29:28.831 0 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 145750 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 145750 ']' 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 145750 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145750 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145750' 00:29:28.831 killing process with pid 145750 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 145750 00:29:28.831 08:56:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 145750 00:29:28.831 [2024-07-12 08:56:03.888723] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:29.090 [2024-07-12 08:56:04.143494] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TXCegNd0UU 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:29:30.467 ************************************ 00:29:30.467 END TEST raid_read_error_test 00:29:30.467 ************************************ 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:30.467 00:29:30.467 real 0m9.020s 00:29:30.467 user 0m14.118s 00:29:30.467 sys 0m1.038s 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:30.467 08:56:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.467 08:56:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:30.467 08:56:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:29:30.467 08:56:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:29:30.467 08:56:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.467 08:56:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:30.467 ************************************ 00:29:30.467 START TEST raid_write_error_test 00:29:30.467 ************************************ 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:29:30.467 08:56:05 
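The write-error variant starting below drives the same construction as the read test that just finished. Distilled from the rpc.py calls traced in both tests (socket path and bdev names verbatim from this run; the loop is a condensation, not the literal script), the per-base-bdev stack and the error arming are:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # backing store: 32 MiB, 512-byte blocks
    $RPC bdev_error_create BaseBdev${i}_malloc                 # wraps it as EE_BaseBdev${i}_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
$RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure  # or: ... write failure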
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fDA4OpFrfC 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=145978 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 145978 /var/tmp/spdk-raid.sock 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 145978 ']' 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:30.467 08:56:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:30.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:30.467 08:56:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.467 [2024-07-12 08:56:05.397067] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:29:30.467 [2024-07-12 08:56:05.397613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145978 ] 00:29:30.467 [2024-07-12 08:56:05.568269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.725 [2024-07-12 08:56:05.767510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.983 [2024-07-12 08:56:05.949660] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:31.242 08:56:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:31.242 08:56:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:29:31.242 08:56:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:31.242 08:56:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:31.500 BaseBdev1_malloc 00:29:31.500 08:56:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:31.759 true 00:29:31.759 08:56:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:32.018 [2024-07-12 08:56:07.064989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:32.018 [2024-07-12 08:56:07.065407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.018 [2024-07-12 08:56:07.065523] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:32.018 [2024-07-12 08:56:07.065649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.018 [2024-07-12 08:56:07.068451] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.018 [2024-07-12 08:56:07.068633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:32.018 BaseBdev1 00:29:32.018 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:32.018 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:32.277 
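For orientation, the process being waited on here is bdevperf started with -z, which appears to defer I/O until the perform_tests RPC arrives; the raid stack is then assembled over its socket and the workload kicked off with the companion script. A sketch reconstructed from the traced invocations:

# -z: wait for the perform_tests RPC instead of running at startup
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# ... bdev_*_create / bdev_raid_create calls over the same socket ...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests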
BaseBdev2_malloc 00:29:32.277 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:32.535 true 00:29:32.535 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:32.794 [2024-07-12 08:56:07.802368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:32.794 [2024-07-12 08:56:07.802786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.794 [2024-07-12 08:56:07.802868] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:32.794 [2024-07-12 08:56:07.802995] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.794 [2024-07-12 08:56:07.805565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.794 [2024-07-12 08:56:07.805766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:32.794 BaseBdev2 00:29:32.794 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:32.794 08:56:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:33.052 BaseBdev3_malloc 00:29:33.052 08:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:33.310 true 00:29:33.310 08:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:33.568 [2024-07-12 08:56:08.534386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:33.568 [2024-07-12 08:56:08.534795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.568 [2024-07-12 08:56:08.534874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:33.568 [2024-07-12 08:56:08.534994] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.568 [2024-07-12 08:56:08.537617] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.568 [2024-07-12 08:56:08.537818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:33.568 BaseBdev3 00:29:33.568 08:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:29:33.568 08:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:33.827 BaseBdev4_malloc 00:29:33.827 08:56:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:33.827 true 00:29:34.086 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:34.086 [2024-07-12 08:56:09.235878] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:34.086 [2024-07-12 08:56:09.236261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.086 [2024-07-12 08:56:09.236372] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:34.086 [2024-07-12 08:56:09.236509] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.086 [2024-07-12 08:56:09.238991] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.086 [2024-07-12 08:56:09.239179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:34.086 BaseBdev4 00:29:34.086 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:29:34.345 [2024-07-12 08:56:09.456168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:34.345 [2024-07-12 08:56:09.458558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:34.345 [2024-07-12 08:56:09.458807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:34.345 [2024-07-12 08:56:09.459002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:34.345 [2024-07-12 08:56:09.459429] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:29:34.345 [2024-07-12 08:56:09.459560] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:34.345 [2024-07-12 08:56:09.459746] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:34.345 [2024-07-12 08:56:09.460292] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:29:34.345 [2024-07-12 08:56:09.460430] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:29:34.345 [2024-07-12 08:56:09.460779] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:34.345 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.345 08:56:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.604 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:34.604 "name": "raid_bdev1", 00:29:34.604 "uuid": "f9dba364-85d7-4e8e-bd17-ba20b79173f0", 00:29:34.604 "strip_size_kb": 0, 00:29:34.604 "state": "online", 00:29:34.604 "raid_level": "raid1", 00:29:34.604 "superblock": true, 00:29:34.604 "num_base_bdevs": 4, 00:29:34.604 "num_base_bdevs_discovered": 4, 00:29:34.604 "num_base_bdevs_operational": 4, 00:29:34.604 "base_bdevs_list": [ 00:29:34.604 { 00:29:34.604 "name": "BaseBdev1", 00:29:34.604 "uuid": "4e214036-f634-56bb-a688-59f96c865911", 00:29:34.604 "is_configured": true, 00:29:34.604 "data_offset": 2048, 00:29:34.604 "data_size": 63488 00:29:34.604 }, 00:29:34.604 { 00:29:34.604 "name": "BaseBdev2", 00:29:34.604 "uuid": "3999cdfb-8de6-56df-bbd9-58d0f70eaf75", 00:29:34.604 "is_configured": true, 00:29:34.604 "data_offset": 2048, 00:29:34.604 "data_size": 63488 00:29:34.604 }, 00:29:34.604 { 00:29:34.604 "name": "BaseBdev3", 00:29:34.604 "uuid": "4db46a35-94c8-548f-87b2-421da8d947b3", 00:29:34.604 "is_configured": true, 00:29:34.604 "data_offset": 2048, 00:29:34.604 "data_size": 63488 00:29:34.604 }, 00:29:34.604 { 00:29:34.604 "name": "BaseBdev4", 00:29:34.604 "uuid": "e4470a41-06ec-5791-9fba-621223b4ef5b", 00:29:34.604 "is_configured": true, 00:29:34.604 "data_offset": 2048, 00:29:34.604 "data_size": 63488 00:29:34.604 } 00:29:34.604 ] 00:29:34.604 }' 00:29:34.604 08:56:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:34.604 08:56:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.540 08:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:29:35.540 08:56:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:35.540 [2024-07-12 08:56:10.542188] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:36.477 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:36.736 [2024-07-12 08:56:11.709998] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:29:36.736 [2024-07-12 08:56:11.710393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:36.736 [2024-07-12 08:56:11.710715] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.736 08:56:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.994 08:56:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:36.994 "name": "raid_bdev1", 00:29:36.994 "uuid": "f9dba364-85d7-4e8e-bd17-ba20b79173f0", 00:29:36.994 "strip_size_kb": 0, 00:29:36.994 "state": "online", 00:29:36.994 "raid_level": "raid1", 00:29:36.994 "superblock": true, 00:29:36.994 "num_base_bdevs": 4, 00:29:36.994 "num_base_bdevs_discovered": 3, 00:29:36.994 "num_base_bdevs_operational": 3, 00:29:36.994 "base_bdevs_list": [ 00:29:36.994 { 00:29:36.994 "name": null, 00:29:36.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.994 "is_configured": false, 00:29:36.994 "data_offset": 2048, 00:29:36.994 "data_size": 63488 00:29:36.994 }, 00:29:36.994 { 00:29:36.994 "name": "BaseBdev2", 00:29:36.994 "uuid": "3999cdfb-8de6-56df-bbd9-58d0f70eaf75", 00:29:36.994 "is_configured": true, 00:29:36.994 "data_offset": 2048, 00:29:36.994 "data_size": 63488 00:29:36.994 }, 00:29:36.994 { 00:29:36.994 "name": "BaseBdev3", 00:29:36.994 "uuid": "4db46a35-94c8-548f-87b2-421da8d947b3", 00:29:36.994 "is_configured": true, 00:29:36.994 "data_offset": 2048, 00:29:36.994 "data_size": 63488 00:29:36.994 }, 00:29:36.994 { 00:29:36.994 "name": "BaseBdev4", 00:29:36.994 "uuid": "e4470a41-06ec-5791-9fba-621223b4ef5b", 00:29:36.994 "is_configured": true, 00:29:36.994 "data_offset": 2048, 00:29:36.994 "data_size": 63488 00:29:36.994 } 00:29:36.994 ] 00:29:36.994 }' 00:29:36.994 08:56:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:36.994 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.561 08:56:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:37.820 [2024-07-12 08:56:12.892441] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:37.820 [2024-07-12 08:56:12.892748] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:37.820 [2024-07-12 08:56:12.895616] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:37.820 [2024-07-12 08:56:12.895806] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:37.820 [2024-07-12 08:56:12.895947] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:37.820 [2024-07-12 
08:56:12.896186] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:29:37.820 0 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 145978 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 145978 ']' 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 145978 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145978 00:29:37.820 killing process with pid 145978 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145978' 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 145978 00:29:37.820 08:56:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 145978 00:29:37.820 [2024-07-12 08:56:12.939037] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:38.078 [2024-07-12 08:56:13.191100] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fDA4OpFrfC 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:29:39.487 ************************************ 00:29:39.487 END TEST raid_write_error_test 00:29:39.487 ************************************ 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:39.487 00:29:39.487 real 0m8.961s 00:29:39.487 user 0m14.065s 00:29:39.487 sys 0m0.944s 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.487 08:56:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.487 08:56:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:39.487 08:56:14 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:29:39.487 08:56:14 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:29:39.487 08:56:14 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:29:39.487 08:56:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:39.487 08:56:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.487 08:56:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:39.487 
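Before the rebuild test begins below, note how the verdict just above was computed: the bdevperf log is filtered down to the raid_bdev1 summary row and its failures-per-second column is compared against zero, the point being that raid1's redundancy must absorb the injected write error without any I/O failing at the raid level (log path as in this run):

fail_per_s=$(grep -v Job /raidtest/tmp.fDA4OpFrfC | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s = 0.00 ]]   # redundancy present, so zero failed I/O per second expected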
************************************ 00:29:39.487 START TEST raid_rebuild_test 00:29:39.487 ************************************ 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.487 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=146214 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 146214 /var/tmp/spdk-raid.sock 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 146214 ']' 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock...' 00:29:39.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.488 08:56:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.488 [2024-07-12 08:56:14.413703] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:29:39.488 [2024-07-12 08:56:14.414878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146214 ] 00:29:39.488 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:39.488 Zero copy mechanism will not be used. 00:29:39.488 [2024-07-12 08:56:14.588145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.746 [2024-07-12 08:56:14.787803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.005 [2024-07-12 08:56:14.969446] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:40.264 08:56:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.264 08:56:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:40.264 08:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:40.264 08:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:40.522 BaseBdev1_malloc 00:29:40.522 08:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:40.781 [2024-07-12 08:56:15.872858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:40.781 [2024-07-12 08:56:15.873289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.781 [2024-07-12 08:56:15.873370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:40.781 [2024-07-12 08:56:15.873506] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.781 [2024-07-12 08:56:15.876015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.781 [2024-07-12 08:56:15.876187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:40.781 BaseBdev1 00:29:40.781 08:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:40.781 08:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:41.040 BaseBdev2_malloc 00:29:41.040 08:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:41.300 [2024-07-12 08:56:16.428070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:41.300 [2024-07-12 08:56:16.428487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.300 [2024-07-12 
08:56:16.428569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:41.300 [2024-07-12 08:56:16.428891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.300 [2024-07-12 08:56:16.431293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.300 [2024-07-12 08:56:16.431490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:41.300 BaseBdev2 00:29:41.300 08:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:41.559 spare_malloc 00:29:41.559 08:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:41.818 spare_delay 00:29:41.818 08:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:42.076 [2024-07-12 08:56:17.220585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:42.077 [2024-07-12 08:56:17.221021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.077 [2024-07-12 08:56:17.221100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:42.077 [2024-07-12 08:56:17.221231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.077 [2024-07-12 08:56:17.223703] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.077 [2024-07-12 08:56:17.223906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:42.077 spare 00:29:42.077 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:42.336 [2024-07-12 08:56:17.444743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:42.336 [2024-07-12 08:56:17.447053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:42.336 [2024-07-12 08:56:17.447330] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:29:42.336 [2024-07-12 08:56:17.447444] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:42.336 [2024-07-12 08:56:17.447639] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:29:42.336 [2024-07-12 08:56:17.448231] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:29:42.336 [2024-07-12 08:56:17.448429] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:29:42.336 [2024-07-12 08:56:17.448794] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.336 08:56:17 
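The "spare" assembled just above is a passthru on top of a delay bdev rather than a plain malloc. Assuming the -r/-t and -w/-n arguments of bdev_delay_create are average and p99 read and write latency in microseconds, this leaves reads undelayed but slows every write by roughly 100 ms, presumably so that the later rebuild onto the spare is slow enough for its progress to be observed:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare   # exposed to the raid as "spare"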
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.336 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.595 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.595 "name": "raid_bdev1", 00:29:42.595 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:42.595 "strip_size_kb": 0, 00:29:42.595 "state": "online", 00:29:42.595 "raid_level": "raid1", 00:29:42.595 "superblock": false, 00:29:42.595 "num_base_bdevs": 2, 00:29:42.595 "num_base_bdevs_discovered": 2, 00:29:42.595 "num_base_bdevs_operational": 2, 00:29:42.595 "base_bdevs_list": [ 00:29:42.595 { 00:29:42.595 "name": "BaseBdev1", 00:29:42.595 "uuid": "29cb7bce-e975-5455-a398-2a3fc1914c8f", 00:29:42.595 "is_configured": true, 00:29:42.595 "data_offset": 0, 00:29:42.595 "data_size": 65536 00:29:42.595 }, 00:29:42.595 { 00:29:42.595 "name": "BaseBdev2", 00:29:42.595 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:42.595 "is_configured": true, 00:29:42.595 "data_offset": 0, 00:29:42.595 "data_size": 65536 00:29:42.595 } 00:29:42.595 ] 00:29:42.595 }' 00:29:42.595 08:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.595 08:56:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.532 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:43.532 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:43.532 [2024-07-12 08:56:18.633258] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:43.532 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:43.532 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.532 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:43.791 08:56:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:44.050 [2024-07-12 08:56:19.121180] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:44.050 /dev/nbd0 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.050 1+0 records in 00:29:44.050 1+0 records out 00:29:44.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466419 s, 8.8 MB/s 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:44.050 08:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:50.613 65536+0 records in 00:29:50.613 
65536+0 records out 00:29:50.613 33554432 bytes (34 MB, 32 MiB) copied, 5.64038 s, 5.9 MB/s 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.613 08:56:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:50.613 [2024-07-12 08:56:25.103001] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:50.613 [2024-07-12 08:56:25.414684] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.613 
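The dd transfer that just completed is the data-population step of the rebuild test: the raid bdev is exported as an nbd device, filled end to end with random data (65536 x 512 B = 32 MiB, matching the array size), and detached again; shortly below, BaseBdev1 is removed to force the degraded state the verifier then checks. The sequence, as traced in this run:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct   # whole-array write
$RPC nbd_stop_disk /dev/nbd0
$RPC bdev_raid_remove_base_bdev BaseBdev1                         # degrade to 1 of 2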
08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.613 "name": "raid_bdev1", 00:29:50.613 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:50.613 "strip_size_kb": 0, 00:29:50.613 "state": "online", 00:29:50.613 "raid_level": "raid1", 00:29:50.613 "superblock": false, 00:29:50.613 "num_base_bdevs": 2, 00:29:50.613 "num_base_bdevs_discovered": 1, 00:29:50.613 "num_base_bdevs_operational": 1, 00:29:50.613 "base_bdevs_list": [ 00:29:50.613 { 00:29:50.613 "name": null, 00:29:50.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.613 "is_configured": false, 00:29:50.613 "data_offset": 0, 00:29:50.613 "data_size": 65536 00:29:50.613 }, 00:29:50.613 { 00:29:50.613 "name": "BaseBdev2", 00:29:50.613 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:50.613 "is_configured": true, 00:29:50.613 "data_offset": 0, 00:29:50.613 "data_size": 65536 00:29:50.613 } 00:29:50.613 ] 00:29:50.613 }' 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.613 08:56:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.181 08:56:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:51.440 [2024-07-12 08:56:26.614996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:51.440 [2024-07-12 08:56:26.629030] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b360 00:29:51.440 [2024-07-12 08:56:26.631196] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:51.699 08:56:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.636 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.897 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.897 "name": "raid_bdev1", 00:29:52.897 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:52.897 "strip_size_kb": 0, 00:29:52.897 "state": "online", 00:29:52.897 "raid_level": "raid1", 00:29:52.897 "superblock": false, 00:29:52.897 "num_base_bdevs": 2, 00:29:52.897 "num_base_bdevs_discovered": 2, 00:29:52.897 "num_base_bdevs_operational": 2, 00:29:52.897 "process": { 00:29:52.897 "type": "rebuild", 00:29:52.897 "target": "spare", 00:29:52.897 "progress": { 00:29:52.897 "blocks": 24576, 
00:29:52.897 "percent": 37 00:29:52.897 } 00:29:52.897 }, 00:29:52.897 "base_bdevs_list": [ 00:29:52.897 { 00:29:52.897 "name": "spare", 00:29:52.897 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:52.897 "is_configured": true, 00:29:52.897 "data_offset": 0, 00:29:52.897 "data_size": 65536 00:29:52.897 }, 00:29:52.897 { 00:29:52.897 "name": "BaseBdev2", 00:29:52.897 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:52.897 "is_configured": true, 00:29:52.897 "data_offset": 0, 00:29:52.897 "data_size": 65536 00:29:52.897 } 00:29:52.897 ] 00:29:52.897 }' 00:29:52.897 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.897 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.897 08:56:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.897 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.897 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:53.156 [2024-07-12 08:56:28.288929] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.156 [2024-07-12 08:56:28.342745] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:53.156 [2024-07-12 08:56:28.342887] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.156 [2024-07-12 08:56:28.342908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.156 [2024-07-12 08:56:28.342917] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.414 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.673 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.673 "name": "raid_bdev1", 00:29:53.673 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:53.673 "strip_size_kb": 0, 00:29:53.673 "state": "online", 00:29:53.673 "raid_level": "raid1", 00:29:53.673 "superblock": false, 00:29:53.673 
"num_base_bdevs": 2, 00:29:53.673 "num_base_bdevs_discovered": 1, 00:29:53.673 "num_base_bdevs_operational": 1, 00:29:53.673 "base_bdevs_list": [ 00:29:53.673 { 00:29:53.673 "name": null, 00:29:53.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.673 "is_configured": false, 00:29:53.673 "data_offset": 0, 00:29:53.673 "data_size": 65536 00:29:53.673 }, 00:29:53.673 { 00:29:53.673 "name": "BaseBdev2", 00:29:53.673 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:53.673 "is_configured": true, 00:29:53.673 "data_offset": 0, 00:29:53.673 "data_size": 65536 00:29:53.673 } 00:29:53.673 ] 00:29:53.673 }' 00:29:53.673 08:56:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.673 08:56:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.241 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.500 "name": "raid_bdev1", 00:29:54.500 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:54.500 "strip_size_kb": 0, 00:29:54.500 "state": "online", 00:29:54.500 "raid_level": "raid1", 00:29:54.500 "superblock": false, 00:29:54.500 "num_base_bdevs": 2, 00:29:54.500 "num_base_bdevs_discovered": 1, 00:29:54.500 "num_base_bdevs_operational": 1, 00:29:54.500 "base_bdevs_list": [ 00:29:54.500 { 00:29:54.500 "name": null, 00:29:54.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.500 "is_configured": false, 00:29:54.500 "data_offset": 0, 00:29:54.500 "data_size": 65536 00:29:54.500 }, 00:29:54.500 { 00:29:54.500 "name": "BaseBdev2", 00:29:54.500 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:54.500 "is_configured": true, 00:29:54.500 "data_offset": 0, 00:29:54.500 "data_size": 65536 00:29:54.500 } 00:29:54.500 ] 00:29:54.500 }' 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:54.500 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:54.759 [2024-07-12 08:56:29.866759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.759 [2024-07-12 08:56:29.879800] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:29:54.759 [2024-07-12 08:56:29.881926] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:54.759 08:56:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.136 08:56:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.136 "name": "raid_bdev1", 00:29:56.136 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:56.136 "strip_size_kb": 0, 00:29:56.136 "state": "online", 00:29:56.136 "raid_level": "raid1", 00:29:56.136 "superblock": false, 00:29:56.136 "num_base_bdevs": 2, 00:29:56.136 "num_base_bdevs_discovered": 2, 00:29:56.136 "num_base_bdevs_operational": 2, 00:29:56.136 "process": { 00:29:56.136 "type": "rebuild", 00:29:56.136 "target": "spare", 00:29:56.136 "progress": { 00:29:56.136 "blocks": 24576, 00:29:56.136 "percent": 37 00:29:56.136 } 00:29:56.136 }, 00:29:56.136 "base_bdevs_list": [ 00:29:56.136 { 00:29:56.136 "name": "spare", 00:29:56.136 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:56.136 "is_configured": true, 00:29:56.136 "data_offset": 0, 00:29:56.136 "data_size": 65536 00:29:56.136 }, 00:29:56.136 { 00:29:56.136 "name": "BaseBdev2", 00:29:56.136 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:56.136 "is_configured": true, 00:29:56.136 "data_offset": 0, 00:29:56.136 "data_size": 65536 00:29:56.136 } 00:29:56.136 ] 00:29:56.136 }' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=890 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.136 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.395 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.395 "name": "raid_bdev1", 00:29:56.395 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:56.395 "strip_size_kb": 0, 00:29:56.395 "state": "online", 00:29:56.395 "raid_level": "raid1", 00:29:56.395 "superblock": false, 00:29:56.395 "num_base_bdevs": 2, 00:29:56.395 "num_base_bdevs_discovered": 2, 00:29:56.395 "num_base_bdevs_operational": 2, 00:29:56.395 "process": { 00:29:56.395 "type": "rebuild", 00:29:56.395 "target": "spare", 00:29:56.395 "progress": { 00:29:56.395 "blocks": 30720, 00:29:56.395 "percent": 46 00:29:56.395 } 00:29:56.395 }, 00:29:56.395 "base_bdevs_list": [ 00:29:56.395 { 00:29:56.395 "name": "spare", 00:29:56.395 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:56.395 "is_configured": true, 00:29:56.395 "data_offset": 0, 00:29:56.395 "data_size": 65536 00:29:56.395 }, 00:29:56.395 { 00:29:56.395 "name": "BaseBdev2", 00:29:56.395 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:56.395 "is_configured": true, 00:29:56.395 "data_offset": 0, 00:29:56.395 "data_size": 65536 00:29:56.395 } 00:29:56.395 ] 00:29:56.395 }' 00:29:56.395 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.395 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:56.395 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.654 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:56.654 08:56:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.589 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.847 "name": "raid_bdev1", 00:29:57.847 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:57.847 "strip_size_kb": 0, 00:29:57.847 "state": "online", 00:29:57.847 "raid_level": "raid1", 00:29:57.847 "superblock": false, 00:29:57.847 
"num_base_bdevs": 2, 00:29:57.847 "num_base_bdevs_discovered": 2, 00:29:57.847 "num_base_bdevs_operational": 2, 00:29:57.847 "process": { 00:29:57.847 "type": "rebuild", 00:29:57.847 "target": "spare", 00:29:57.847 "progress": { 00:29:57.847 "blocks": 59392, 00:29:57.847 "percent": 90 00:29:57.847 } 00:29:57.847 }, 00:29:57.847 "base_bdevs_list": [ 00:29:57.847 { 00:29:57.847 "name": "spare", 00:29:57.847 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:57.847 "is_configured": true, 00:29:57.847 "data_offset": 0, 00:29:57.847 "data_size": 65536 00:29:57.847 }, 00:29:57.847 { 00:29:57.847 "name": "BaseBdev2", 00:29:57.847 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:57.847 "is_configured": true, 00:29:57.847 "data_offset": 0, 00:29:57.847 "data_size": 65536 00:29:57.847 } 00:29:57.847 ] 00:29:57.847 }' 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.847 08:56:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:58.104 [2024-07-12 08:56:33.103139] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:58.104 [2024-07-12 08:56:33.103263] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:58.104 [2024-07-12 08:56:33.103364] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:59.037 08:56:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.037 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.037 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.295 "name": "raid_bdev1", 00:29:59.295 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:59.295 "strip_size_kb": 0, 00:29:59.295 "state": "online", 00:29:59.295 "raid_level": "raid1", 00:29:59.295 "superblock": false, 00:29:59.295 "num_base_bdevs": 2, 00:29:59.295 "num_base_bdevs_discovered": 2, 00:29:59.295 "num_base_bdevs_operational": 2, 00:29:59.295 "base_bdevs_list": [ 00:29:59.295 { 00:29:59.295 "name": "spare", 00:29:59.295 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:59.295 "is_configured": true, 00:29:59.295 "data_offset": 0, 00:29:59.295 "data_size": 65536 00:29:59.295 }, 00:29:59.295 { 00:29:59.295 "name": "BaseBdev2", 00:29:59.295 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:59.295 "is_configured": true, 
00:29:59.295 "data_offset": 0, 00:29:59.295 "data_size": 65536 00:29:59.295 } 00:29:59.295 ] 00:29:59.295 }' 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.295 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.553 "name": "raid_bdev1", 00:29:59.553 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:59.553 "strip_size_kb": 0, 00:29:59.553 "state": "online", 00:29:59.553 "raid_level": "raid1", 00:29:59.553 "superblock": false, 00:29:59.553 "num_base_bdevs": 2, 00:29:59.553 "num_base_bdevs_discovered": 2, 00:29:59.553 "num_base_bdevs_operational": 2, 00:29:59.553 "base_bdevs_list": [ 00:29:59.553 { 00:29:59.553 "name": "spare", 00:29:59.553 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:59.553 "is_configured": true, 00:29:59.553 "data_offset": 0, 00:29:59.553 "data_size": 65536 00:29:59.553 }, 00:29:59.553 { 00:29:59.553 "name": "BaseBdev2", 00:29:59.553 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:59.553 "is_configured": true, 00:29:59.553 "data_offset": 0, 00:29:59.553 "data_size": 65536 00:29:59.553 } 00:29:59.553 ] 00:29:59.553 }' 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.553 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:59.554 
08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.554 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.811 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.812 "name": "raid_bdev1", 00:29:59.812 "uuid": "e297cbe3-b9be-409b-aba2-148925cf89f2", 00:29:59.812 "strip_size_kb": 0, 00:29:59.812 "state": "online", 00:29:59.812 "raid_level": "raid1", 00:29:59.812 "superblock": false, 00:29:59.812 "num_base_bdevs": 2, 00:29:59.812 "num_base_bdevs_discovered": 2, 00:29:59.812 "num_base_bdevs_operational": 2, 00:29:59.812 "base_bdevs_list": [ 00:29:59.812 { 00:29:59.812 "name": "spare", 00:29:59.812 "uuid": "d793c6fa-840e-5114-abce-8d91b16b568d", 00:29:59.812 "is_configured": true, 00:29:59.812 "data_offset": 0, 00:29:59.812 "data_size": 65536 00:29:59.812 }, 00:29:59.812 { 00:29:59.812 "name": "BaseBdev2", 00:29:59.812 "uuid": "1298bb2e-df45-59fa-8e7a-8b04ce7b82b4", 00:29:59.812 "is_configured": true, 00:29:59.812 "data_offset": 0, 00:29:59.812 "data_size": 65536 00:29:59.812 } 00:29:59.812 ] 00:29:59.812 }' 00:29:59.812 08:56:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.812 08:56:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.747 08:56:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:01.018 [2024-07-12 08:56:35.982368] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:01.018 [2024-07-12 08:56:35.982419] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:01.018 [2024-07-12 08:56:35.982569] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:01.018 [2024-07-12 08:56:35.982652] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:01.018 [2024-07-12 08:56:35.982665] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:30:01.018 08:56:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.018 08:56:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:01.311 
08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.311 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:01.577 /dev/nbd0 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.577 1+0 records in 00:30:01.577 1+0 records out 00:30:01.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435354 s, 9.4 MB/s 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.577 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:01.837 /dev/nbd1 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local 
i 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.837 1+0 records in 00:30:01.837 1+0 records out 00:30:01.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374064 s, 10.9 MB/s 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.837 08:56:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.096 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.355 08:56:37 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.355 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 146214 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 146214 ']' 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 146214 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146214 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:02.614 killing process with pid 146214 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146214' 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 146214 00:30:02.614 Received shutdown signal, test time was about 60.000000 seconds 00:30:02.614 00:30:02.614 Latency(us) 00:30:02.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.614 =================================================================================================================== 00:30:02.614 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:02.614 08:56:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 146214 00:30:02.614 [2024-07-12 08:56:37.763281] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:30:02.873 [2024-07-12 08:56:37.993024] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:30:04.250 00:30:04.250 real 0m24.703s 00:30:04.250 user 0m34.474s 00:30:04.250 sys 0m4.179s 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.250 ************************************ 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:04.250 END TEST raid_rebuild_test 00:30:04.250 ************************************ 00:30:04.250 08:56:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:04.250 08:56:39 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:30:04.250 08:56:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:04.250 08:56:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.250 08:56:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:04.250 ************************************ 00:30:04.250 START TEST raid_rebuild_test_sb 00:30:04.250 ************************************ 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=146818 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 146818 /var/tmp/spdk-raid.sock 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 146818 ']' 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:04.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.250 08:56:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.250 [2024-07-12 08:56:39.170843] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:30:04.250 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:04.250 Zero copy mechanism will not be used. 
00:30:04.250 [2024-07-12 08:56:39.171047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146818 ] 00:30:04.250 [2024-07-12 08:56:39.334078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.509 [2024-07-12 08:56:39.532332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.768 [2024-07-12 08:56:39.710989] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:05.027 08:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.027 08:56:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:30:05.027 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:05.027 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:05.286 BaseBdev1_malloc 00:30:05.286 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:05.545 [2024-07-12 08:56:40.647883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:05.545 [2024-07-12 08:56:40.648028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:05.545 [2024-07-12 08:56:40.648070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:05.545 [2024-07-12 08:56:40.648091] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:05.545 [2024-07-12 08:56:40.650679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:05.545 [2024-07-12 08:56:40.650747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:05.545 BaseBdev1 00:30:05.545 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:05.545 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:05.805 BaseBdev2_malloc 00:30:05.805 08:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:06.063 [2024-07-12 08:56:41.208326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:06.063 [2024-07-12 08:56:41.208508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.063 [2024-07-12 08:56:41.208570] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:06.063 [2024-07-12 08:56:41.208593] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.063 [2024-07-12 08:56:41.211083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.063 [2024-07-12 08:56:41.211151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:06.063 BaseBdev2 00:30:06.063 08:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:06.321 spare_malloc 00:30:06.321 08:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:06.580 spare_delay 00:30:06.580 08:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:06.838 [2024-07-12 08:56:41.936363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:06.838 [2024-07-12 08:56:41.936520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.838 [2024-07-12 08:56:41.936565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:30:06.838 [2024-07-12 08:56:41.936607] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.838 [2024-07-12 08:56:41.939246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.838 [2024-07-12 08:56:41.939300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:06.838 spare 00:30:06.838 08:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:07.097 [2024-07-12 08:56:42.160451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:07.097 [2024-07-12 08:56:42.162607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:07.097 [2024-07-12 08:56:42.162887] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:30:07.097 [2024-07-12 08:56:42.162914] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:07.097 [2024-07-12 08:56:42.163062] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:30:07.097 [2024-07-12 08:56:42.163522] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:30:07.097 [2024-07-12 08:56:42.163547] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:30:07.097 [2024-07-12 08:56:42.163729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.097 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.357 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.357 "name": "raid_bdev1", 00:30:07.357 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:07.357 "strip_size_kb": 0, 00:30:07.357 "state": "online", 00:30:07.357 "raid_level": "raid1", 00:30:07.357 "superblock": true, 00:30:07.357 "num_base_bdevs": 2, 00:30:07.357 "num_base_bdevs_discovered": 2, 00:30:07.357 "num_base_bdevs_operational": 2, 00:30:07.357 "base_bdevs_list": [ 00:30:07.357 { 00:30:07.357 "name": "BaseBdev1", 00:30:07.357 "uuid": "b293debe-0927-5d20-b521-b04f609bb6de", 00:30:07.357 "is_configured": true, 00:30:07.357 "data_offset": 2048, 00:30:07.357 "data_size": 63488 00:30:07.357 }, 00:30:07.357 { 00:30:07.357 "name": "BaseBdev2", 00:30:07.357 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:07.357 "is_configured": true, 00:30:07.357 "data_offset": 2048, 00:30:07.357 "data_size": 63488 00:30:07.357 } 00:30:07.357 ] 00:30:07.357 }' 00:30:07.357 08:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.357 08:56:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.294 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:08.294 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:08.294 [2024-07-12 08:56:43.384943] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:08.294 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:08.294 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.294 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:08.552 08:56:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:08.552 08:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:08.809 [2024-07-12 08:56:43.968946] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:08.810 /dev/nbd0 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:09.069 1+0 records in 00:30:09.069 1+0 records out 00:30:09.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404063 s, 10.1 MB/s 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:30:09.069 08:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:14.338 63488+0 records in 00:30:14.338 63488+0 records out 00:30:14.338 32505856 bytes (33 MB, 31 MiB) copied, 5.44217 s, 6.0 MB/s 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:14.338 08:56:49 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.338 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:14.596 [2024-07-12 08:56:49.744440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.596 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:14.855 [2024-07-12 08:56:49.948053] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.855 08:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.114 08:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.114 "name": "raid_bdev1", 00:30:15.114 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:15.114 "strip_size_kb": 0, 00:30:15.114 "state": "online", 00:30:15.114 "raid_level": "raid1", 00:30:15.114 "superblock": true, 00:30:15.114 "num_base_bdevs": 2, 00:30:15.114 
"num_base_bdevs_discovered": 1, 00:30:15.114 "num_base_bdevs_operational": 1, 00:30:15.114 "base_bdevs_list": [ 00:30:15.114 { 00:30:15.114 "name": null, 00:30:15.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.114 "is_configured": false, 00:30:15.114 "data_offset": 2048, 00:30:15.114 "data_size": 63488 00:30:15.114 }, 00:30:15.114 { 00:30:15.114 "name": "BaseBdev2", 00:30:15.114 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:15.114 "is_configured": true, 00:30:15.114 "data_offset": 2048, 00:30:15.114 "data_size": 63488 00:30:15.114 } 00:30:15.114 ] 00:30:15.114 }' 00:30:15.114 08:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.114 08:56:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.050 08:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:16.050 [2024-07-12 08:56:51.144313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:16.050 [2024-07-12 08:56:51.157882] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca50a0 00:30:16.050 [2024-07-12 08:56:51.159992] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:16.050 08:56:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.985 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.243 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:17.243 "name": "raid_bdev1", 00:30:17.243 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:17.243 "strip_size_kb": 0, 00:30:17.243 "state": "online", 00:30:17.243 "raid_level": "raid1", 00:30:17.243 "superblock": true, 00:30:17.243 "num_base_bdevs": 2, 00:30:17.243 "num_base_bdevs_discovered": 2, 00:30:17.243 "num_base_bdevs_operational": 2, 00:30:17.243 "process": { 00:30:17.243 "type": "rebuild", 00:30:17.243 "target": "spare", 00:30:17.243 "progress": { 00:30:17.243 "blocks": 24576, 00:30:17.243 "percent": 38 00:30:17.243 } 00:30:17.243 }, 00:30:17.243 "base_bdevs_list": [ 00:30:17.243 { 00:30:17.243 "name": "spare", 00:30:17.243 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:17.243 "is_configured": true, 00:30:17.243 "data_offset": 2048, 00:30:17.243 "data_size": 63488 00:30:17.243 }, 00:30:17.243 { 00:30:17.243 "name": "BaseBdev2", 00:30:17.243 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:17.243 "is_configured": true, 00:30:17.243 "data_offset": 2048, 00:30:17.243 "data_size": 63488 00:30:17.243 } 00:30:17.243 ] 00:30:17.243 }' 00:30:17.243 08:56:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:17.502 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:17.503 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:17.503 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.503 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:17.762 [2024-07-12 08:56:52.774116] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.762 [2024-07-12 08:56:52.871307] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:17.762 [2024-07-12 08:56:52.871430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.762 [2024-07-12 08:56:52.871451] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.762 [2024-07-12 08:56:52.871461] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.762 08:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.026 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.026 "name": "raid_bdev1", 00:30:18.026 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:18.026 "strip_size_kb": 0, 00:30:18.026 "state": "online", 00:30:18.026 "raid_level": "raid1", 00:30:18.026 "superblock": true, 00:30:18.026 "num_base_bdevs": 2, 00:30:18.026 "num_base_bdevs_discovered": 1, 00:30:18.026 "num_base_bdevs_operational": 1, 00:30:18.026 "base_bdevs_list": [ 00:30:18.026 { 00:30:18.026 "name": null, 00:30:18.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.026 "is_configured": false, 00:30:18.026 "data_offset": 2048, 00:30:18.026 "data_size": 63488 00:30:18.026 }, 00:30:18.026 { 00:30:18.026 "name": "BaseBdev2", 00:30:18.026 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:18.026 "is_configured": true, 00:30:18.026 "data_offset": 2048, 
00:30:18.026 "data_size": 63488 00:30:18.026 } 00:30:18.026 ] 00:30:18.026 }' 00:30:18.026 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.026 08:56:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.961 08:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.961 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.961 "name": "raid_bdev1", 00:30:18.961 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:18.961 "strip_size_kb": 0, 00:30:18.961 "state": "online", 00:30:18.961 "raid_level": "raid1", 00:30:18.961 "superblock": true, 00:30:18.961 "num_base_bdevs": 2, 00:30:18.961 "num_base_bdevs_discovered": 1, 00:30:18.961 "num_base_bdevs_operational": 1, 00:30:18.961 "base_bdevs_list": [ 00:30:18.961 { 00:30:18.961 "name": null, 00:30:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.961 "is_configured": false, 00:30:18.961 "data_offset": 2048, 00:30:18.961 "data_size": 63488 00:30:18.961 }, 00:30:18.961 { 00:30:18.961 "name": "BaseBdev2", 00:30:18.961 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:18.961 "is_configured": true, 00:30:18.961 "data_offset": 2048, 00:30:18.961 "data_size": 63488 00:30:18.961 } 00:30:18.961 ] 00:30:18.961 }' 00:30:18.961 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:19.220 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:19.220 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:19.220 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:19.220 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:19.479 [2024-07-12 08:56:54.510516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:19.479 [2024-07-12 08:56:54.523427] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5240 00:30:19.479 [2024-07-12 08:56:54.525569] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:19.479 08:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:20.416 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.416 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.416 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # 
local process_type=rebuild 00:30:20.416 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:20.416 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:20.417 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.417 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.677 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.677 "name": "raid_bdev1", 00:30:20.677 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:20.677 "strip_size_kb": 0, 00:30:20.677 "state": "online", 00:30:20.677 "raid_level": "raid1", 00:30:20.677 "superblock": true, 00:30:20.677 "num_base_bdevs": 2, 00:30:20.677 "num_base_bdevs_discovered": 2, 00:30:20.677 "num_base_bdevs_operational": 2, 00:30:20.677 "process": { 00:30:20.677 "type": "rebuild", 00:30:20.677 "target": "spare", 00:30:20.677 "progress": { 00:30:20.677 "blocks": 24576, 00:30:20.677 "percent": 38 00:30:20.677 } 00:30:20.677 }, 00:30:20.677 "base_bdevs_list": [ 00:30:20.677 { 00:30:20.677 "name": "spare", 00:30:20.677 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:20.677 "is_configured": true, 00:30:20.677 "data_offset": 2048, 00:30:20.677 "data_size": 63488 00:30:20.677 }, 00:30:20.677 { 00:30:20.677 "name": "BaseBdev2", 00:30:20.677 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:20.677 "is_configured": true, 00:30:20.677 "data_offset": 2048, 00:30:20.677 "data_size": 63488 00:30:20.677 } 00:30:20.677 ] 00:30:20.677 }' 00:30:20.677 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:20.677 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:20.677 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:20.958 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=914 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.958 08:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:21.231 "name": "raid_bdev1", 00:30:21.231 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:21.231 "strip_size_kb": 0, 00:30:21.231 "state": "online", 00:30:21.231 "raid_level": "raid1", 00:30:21.231 "superblock": true, 00:30:21.231 "num_base_bdevs": 2, 00:30:21.231 "num_base_bdevs_discovered": 2, 00:30:21.231 "num_base_bdevs_operational": 2, 00:30:21.231 "process": { 00:30:21.231 "type": "rebuild", 00:30:21.231 "target": "spare", 00:30:21.231 "progress": { 00:30:21.231 "blocks": 32768, 00:30:21.231 "percent": 51 00:30:21.231 } 00:30:21.231 }, 00:30:21.231 "base_bdevs_list": [ 00:30:21.231 { 00:30:21.231 "name": "spare", 00:30:21.231 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:21.231 "is_configured": true, 00:30:21.231 "data_offset": 2048, 00:30:21.231 "data_size": 63488 00:30:21.231 }, 00:30:21.231 { 00:30:21.231 "name": "BaseBdev2", 00:30:21.231 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:21.231 "is_configured": true, 00:30:21.231 "data_offset": 2048, 00:30:21.231 "data_size": 63488 00:30:21.231 } 00:30:21.231 ] 00:30:21.231 }' 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.231 08:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.166 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.425 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:22.425 "name": "raid_bdev1", 00:30:22.425 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:22.425 "strip_size_kb": 0, 00:30:22.425 "state": "online", 00:30:22.425 "raid_level": "raid1", 00:30:22.425 "superblock": true, 00:30:22.425 "num_base_bdevs": 2, 00:30:22.425 "num_base_bdevs_discovered": 2, 00:30:22.425 "num_base_bdevs_operational": 2, 00:30:22.425 "process": { 00:30:22.425 "type": "rebuild", 00:30:22.425 
"target": "spare", 00:30:22.425 "progress": { 00:30:22.425 "blocks": 59392, 00:30:22.425 "percent": 93 00:30:22.425 } 00:30:22.426 }, 00:30:22.426 "base_bdevs_list": [ 00:30:22.426 { 00:30:22.426 "name": "spare", 00:30:22.426 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:22.426 "is_configured": true, 00:30:22.426 "data_offset": 2048, 00:30:22.426 "data_size": 63488 00:30:22.426 }, 00:30:22.426 { 00:30:22.426 "name": "BaseBdev2", 00:30:22.426 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:22.426 "is_configured": true, 00:30:22.426 "data_offset": 2048, 00:30:22.426 "data_size": 63488 00:30:22.426 } 00:30:22.426 ] 00:30:22.426 }' 00:30:22.426 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:22.426 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:22.426 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:22.685 [2024-07-12 08:56:57.645689] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:22.685 [2024-07-12 08:56:57.645796] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:22.685 [2024-07-12 08:56:57.645986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:22.685 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:22.685 08:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.621 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.879 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.879 "name": "raid_bdev1", 00:30:23.879 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:23.879 "strip_size_kb": 0, 00:30:23.879 "state": "online", 00:30:23.879 "raid_level": "raid1", 00:30:23.879 "superblock": true, 00:30:23.879 "num_base_bdevs": 2, 00:30:23.879 "num_base_bdevs_discovered": 2, 00:30:23.879 "num_base_bdevs_operational": 2, 00:30:23.879 "base_bdevs_list": [ 00:30:23.879 { 00:30:23.879 "name": "spare", 00:30:23.879 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:23.879 "is_configured": true, 00:30:23.879 "data_offset": 2048, 00:30:23.879 "data_size": 63488 00:30:23.879 }, 00:30:23.879 { 00:30:23.879 "name": "BaseBdev2", 00:30:23.879 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:23.879 "is_configured": true, 00:30:23.879 "data_offset": 2048, 00:30:23.879 "data_size": 63488 00:30:23.879 } 00:30:23.879 ] 00:30:23.879 }' 00:30:23.879 08:56:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.879 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:23.879 08:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.879 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.138 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.138 "name": "raid_bdev1", 00:30:24.138 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:24.138 "strip_size_kb": 0, 00:30:24.138 "state": "online", 00:30:24.138 "raid_level": "raid1", 00:30:24.138 "superblock": true, 00:30:24.138 "num_base_bdevs": 2, 00:30:24.138 "num_base_bdevs_discovered": 2, 00:30:24.138 "num_base_bdevs_operational": 2, 00:30:24.138 "base_bdevs_list": [ 00:30:24.138 { 00:30:24.138 "name": "spare", 00:30:24.138 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:24.138 "is_configured": true, 00:30:24.138 "data_offset": 2048, 00:30:24.138 "data_size": 63488 00:30:24.138 }, 00:30:24.138 { 00:30:24.138 "name": "BaseBdev2", 00:30:24.138 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:24.138 "is_configured": true, 00:30:24.138 "data_offset": 2048, 00:30:24.138 "data_size": 63488 00:30:24.138 } 00:30:24.138 ] 00:30:24.138 }' 00:30:24.138 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.397 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.398 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.398 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.398 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.656 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.656 "name": "raid_bdev1", 00:30:24.656 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:24.656 "strip_size_kb": 0, 00:30:24.656 "state": "online", 00:30:24.656 "raid_level": "raid1", 00:30:24.656 "superblock": true, 00:30:24.656 "num_base_bdevs": 2, 00:30:24.656 "num_base_bdevs_discovered": 2, 00:30:24.656 "num_base_bdevs_operational": 2, 00:30:24.656 "base_bdevs_list": [ 00:30:24.656 { 00:30:24.656 "name": "spare", 00:30:24.656 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:24.656 "is_configured": true, 00:30:24.656 "data_offset": 2048, 00:30:24.656 "data_size": 63488 00:30:24.656 }, 00:30:24.656 { 00:30:24.656 "name": "BaseBdev2", 00:30:24.656 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:24.656 "is_configured": true, 00:30:24.656 "data_offset": 2048, 00:30:24.656 "data_size": 63488 00:30:24.656 } 00:30:24.656 ] 00:30:24.656 }' 00:30:24.656 08:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.656 08:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.223 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:25.481 [2024-07-12 08:57:00.561963] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:25.481 [2024-07-12 08:57:00.562005] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:25.481 [2024-07-12 08:57:00.562111] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:25.481 [2024-07-12 08:57:00.562204] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:25.481 [2024-07-12 08:57:00.562218] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:30:25.481 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.481 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
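[editorial note] The trace above re-exposes BaseBdev1 and the spare over NBD before the test compares their contents with cmp. Reconstructed purely from the nbd_common.sh lines captured in this log, the helper splits its bdev and nbd-device arguments into arrays, starts one NBD device per bdev through the test app's RPC socket, and then waits for each kernel device to appear; the sketch below is an approximation based on the traced commands, not the verbatim SPDK script (waitfornbd comes from autotest_common.sh as shown a few lines later):

  # Rough reconstruction of nbd_start_disks as traced in bdev/nbd_common.sh;
  # argument handling and rpc.py calls are copied from the trace, the loop
  # bounds and error handling are assumptions.
  nbd_start_disks() {
          local rpc_server=$1      # e.g. /var/tmp/spdk-raid.sock
          local bdev_list=($2)     # e.g. "BaseBdev1 spare"
          local nbd_list=($3)      # e.g. "/dev/nbd0 /dev/nbd1"
          local i
          for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
                  # export bdev i over NBD device i via the app's RPC socket
                  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
                          nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
                  # block until the kernel device shows up in /proc/partitions
                  waitfornbd "$(basename "${nbd_list[$i]}")"
          done
  }

Invoked as in the trace: nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'.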
00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:25.740 08:57:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:25.999 /dev/nbd0 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:25.999 1+0 records in 00:30:25.999 1+0 records out 00:30:25.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586198 s, 7.0 MB/s 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:25.999 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:26.565 /dev/nbd1 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:26.565 1+0 records in 00:30:26.565 1+0 records out 00:30:26.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470715 s, 8.7 MB/s 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:26.565 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:26.823 08:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:27.080 08:57:02 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:27.080 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:27.338 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:27.339 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:27.339 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:27.339 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:27.598 [2024-07-12 08:57:02.772910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:27.598 [2024-07-12 08:57:02.773032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:27.598 [2024-07-12 08:57:02.773097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:27.598 [2024-07-12 08:57:02.773127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:27.598 [2024-07-12 08:57:02.775720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:27.598 [2024-07-12 08:57:02.775790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:27.598 [2024-07-12 08:57:02.775921] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:27.598 [2024-07-12 08:57:02.776056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:27.598 [2024-07-12 08:57:02.776243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:27.598 spare 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:27.598 08:57:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.598 08:57:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.857 [2024-07-12 08:57:02.876373] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:27.857 [2024-07-12 08:57:02.876421] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:27.857 [2024-07-12 08:57:02.876643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5be0 00:30:27.857 [2024-07-12 08:57:02.877127] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:27.857 [2024-07-12 08:57:02.877149] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:27.857 [2024-07-12 08:57:02.877342] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.857 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.857 "name": "raid_bdev1", 00:30:27.857 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:27.857 "strip_size_kb": 0, 00:30:27.857 "state": "online", 00:30:27.857 "raid_level": "raid1", 00:30:27.857 "superblock": true, 00:30:27.857 "num_base_bdevs": 2, 00:30:27.857 "num_base_bdevs_discovered": 2, 00:30:27.857 "num_base_bdevs_operational": 2, 00:30:27.857 "base_bdevs_list": [ 00:30:27.857 { 00:30:27.857 "name": "spare", 00:30:27.857 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:27.857 "is_configured": true, 00:30:27.857 "data_offset": 2048, 00:30:27.857 "data_size": 63488 00:30:27.857 }, 00:30:27.857 { 00:30:27.857 "name": "BaseBdev2", 00:30:27.857 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:27.857 "is_configured": true, 00:30:27.857 "data_offset": 2048, 00:30:27.857 "data_size": 63488 00:30:27.857 } 00:30:27.857 ] 00:30:27.857 }' 00:30:27.857 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.857 08:57:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:28.793 08:57:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.793 "name": "raid_bdev1", 00:30:28.793 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:28.793 "strip_size_kb": 0, 00:30:28.793 "state": "online", 00:30:28.793 "raid_level": "raid1", 00:30:28.793 "superblock": true, 00:30:28.793 "num_base_bdevs": 2, 00:30:28.793 "num_base_bdevs_discovered": 2, 00:30:28.793 "num_base_bdevs_operational": 2, 00:30:28.793 "base_bdevs_list": [ 00:30:28.793 { 00:30:28.793 "name": "spare", 00:30:28.793 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:28.793 "is_configured": true, 00:30:28.793 "data_offset": 2048, 00:30:28.793 "data_size": 63488 00:30:28.793 }, 00:30:28.793 { 00:30:28.793 "name": "BaseBdev2", 00:30:28.793 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:28.793 "is_configured": true, 00:30:28.793 "data_offset": 2048, 00:30:28.793 "data_size": 63488 00:30:28.793 } 00:30:28.793 ] 00:30:28.793 }' 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:28.793 08:57:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.051 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:29.051 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.051 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:29.309 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.309 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:29.309 [2024-07-12 08:57:04.493750] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.568 "name": "raid_bdev1", 00:30:29.568 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:29.568 "strip_size_kb": 0, 00:30:29.568 "state": "online", 00:30:29.568 "raid_level": "raid1", 00:30:29.568 "superblock": true, 00:30:29.568 "num_base_bdevs": 2, 00:30:29.568 "num_base_bdevs_discovered": 1, 00:30:29.568 "num_base_bdevs_operational": 1, 00:30:29.568 "base_bdevs_list": [ 00:30:29.568 { 00:30:29.568 "name": null, 00:30:29.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.568 "is_configured": false, 00:30:29.568 "data_offset": 2048, 00:30:29.568 "data_size": 63488 00:30:29.568 }, 00:30:29.568 { 00:30:29.568 "name": "BaseBdev2", 00:30:29.568 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:29.568 "is_configured": true, 00:30:29.568 "data_offset": 2048, 00:30:29.568 "data_size": 63488 00:30:29.568 } 00:30:29.568 ] 00:30:29.568 }' 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.568 08:57:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.502 08:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:30.502 [2024-07-12 08:57:05.646057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:30.502 [2024-07-12 08:57:05.646315] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:30.502 [2024-07-12 08:57:05.646331] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:30.502 [2024-07-12 08:57:05.646450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:30.502 [2024-07-12 08:57:05.659475] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5d80 00:30:30.502 [2024-07-12 08:57:05.661632] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:30.502 08:57:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.876 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.877 "name": "raid_bdev1", 00:30:31.877 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:31.877 "strip_size_kb": 0, 00:30:31.877 "state": "online", 00:30:31.877 "raid_level": "raid1", 00:30:31.877 "superblock": true, 00:30:31.877 "num_base_bdevs": 2, 00:30:31.877 "num_base_bdevs_discovered": 2, 00:30:31.877 "num_base_bdevs_operational": 2, 00:30:31.877 "process": { 00:30:31.877 "type": "rebuild", 00:30:31.877 "target": "spare", 00:30:31.877 "progress": { 00:30:31.877 "blocks": 24576, 00:30:31.877 "percent": 38 00:30:31.877 } 00:30:31.877 }, 00:30:31.877 "base_bdevs_list": [ 00:30:31.877 { 00:30:31.877 "name": "spare", 00:30:31.877 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:31.877 "is_configured": true, 00:30:31.877 "data_offset": 2048, 00:30:31.877 "data_size": 63488 00:30:31.877 }, 00:30:31.877 { 00:30:31.877 "name": "BaseBdev2", 00:30:31.877 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:31.877 "is_configured": true, 00:30:31.877 "data_offset": 2048, 00:30:31.877 "data_size": 63488 00:30:31.877 } 00:30:31.877 ] 00:30:31.877 }' 00:30:31.877 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.877 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:31.877 08:57:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.877 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.877 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:32.135 [2024-07-12 08:57:07.291626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:32.394 [2024-07-12 08:57:07.372821] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:32.394 [2024-07-12 08:57:07.372978] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.394 
[2024-07-12 08:57:07.372999] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:32.394 [2024-07-12 08:57:07.373009] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.394 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.653 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.653 "name": "raid_bdev1", 00:30:32.653 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:32.653 "strip_size_kb": 0, 00:30:32.653 "state": "online", 00:30:32.653 "raid_level": "raid1", 00:30:32.653 "superblock": true, 00:30:32.653 "num_base_bdevs": 2, 00:30:32.653 "num_base_bdevs_discovered": 1, 00:30:32.653 "num_base_bdevs_operational": 1, 00:30:32.653 "base_bdevs_list": [ 00:30:32.653 { 00:30:32.653 "name": null, 00:30:32.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.653 "is_configured": false, 00:30:32.653 "data_offset": 2048, 00:30:32.653 "data_size": 63488 00:30:32.653 }, 00:30:32.653 { 00:30:32.653 "name": "BaseBdev2", 00:30:32.653 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:32.653 "is_configured": true, 00:30:32.653 "data_offset": 2048, 00:30:32.653 "data_size": 63488 00:30:32.653 } 00:30:32.653 ] 00:30:32.653 }' 00:30:32.653 08:57:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.653 08:57:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.221 08:57:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:33.480 [2024-07-12 08:57:08.612089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:33.480 [2024-07-12 08:57:08.612242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.480 [2024-07-12 08:57:08.612299] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:33.480 [2024-07-12 08:57:08.612331] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.480 [2024-07-12 08:57:08.612959] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.480 [2024-07-12 08:57:08.613005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:33.480 [2024-07-12 08:57:08.613132] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:33.480 [2024-07-12 08:57:08.613148] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:33.480 [2024-07-12 08:57:08.613158] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:33.480 [2024-07-12 08:57:08.613210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:33.480 [2024-07-12 08:57:08.626686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc60c0 00:30:33.480 spare 00:30:33.480 [2024-07-12 08:57:08.628771] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:33.480 08:57:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:34.856 "name": "raid_bdev1", 00:30:34.856 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:34.856 "strip_size_kb": 0, 00:30:34.856 "state": "online", 00:30:34.856 "raid_level": "raid1", 00:30:34.856 "superblock": true, 00:30:34.856 "num_base_bdevs": 2, 00:30:34.856 "num_base_bdevs_discovered": 2, 00:30:34.856 "num_base_bdevs_operational": 2, 00:30:34.856 "process": { 00:30:34.856 "type": "rebuild", 00:30:34.856 "target": "spare", 00:30:34.856 "progress": { 00:30:34.856 "blocks": 24576, 00:30:34.856 "percent": 38 00:30:34.856 } 00:30:34.856 }, 00:30:34.856 "base_bdevs_list": [ 00:30:34.856 { 00:30:34.856 "name": "spare", 00:30:34.856 "uuid": "ddac7706-6716-5ae9-a12c-e4b4886e985b", 00:30:34.856 "is_configured": true, 00:30:34.856 "data_offset": 2048, 00:30:34.856 "data_size": 63488 00:30:34.856 }, 00:30:34.856 { 00:30:34.856 "name": "BaseBdev2", 00:30:34.856 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:34.856 "is_configured": true, 00:30:34.856 "data_offset": 2048, 00:30:34.856 "data_size": 63488 00:30:34.856 } 00:30:34.856 ] 00:30:34.856 }' 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:34.856 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:34.857 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:34.857 
08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:34.857 08:57:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:35.114 [2024-07-12 08:57:10.250824] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.373 [2024-07-12 08:57:10.339936] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:35.373 [2024-07-12 08:57:10.340084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.373 [2024-07-12 08:57:10.340106] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.373 [2024-07-12 08:57:10.340118] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.373 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.631 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.631 "name": "raid_bdev1", 00:30:35.631 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:35.631 "strip_size_kb": 0, 00:30:35.631 "state": "online", 00:30:35.631 "raid_level": "raid1", 00:30:35.631 "superblock": true, 00:30:35.631 "num_base_bdevs": 2, 00:30:35.631 "num_base_bdevs_discovered": 1, 00:30:35.631 "num_base_bdevs_operational": 1, 00:30:35.631 "base_bdevs_list": [ 00:30:35.631 { 00:30:35.631 "name": null, 00:30:35.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.631 "is_configured": false, 00:30:35.631 "data_offset": 2048, 00:30:35.631 "data_size": 63488 00:30:35.631 }, 00:30:35.631 { 00:30:35.631 "name": "BaseBdev2", 00:30:35.631 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:35.631 "is_configured": true, 00:30:35.631 "data_offset": 2048, 00:30:35.631 "data_size": 63488 00:30:35.631 } 00:30:35.631 ] 00:30:35.631 }' 00:30:35.631 08:57:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.631 08:57:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.199 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.458 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.458 "name": "raid_bdev1", 00:30:36.458 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:36.458 "strip_size_kb": 0, 00:30:36.458 "state": "online", 00:30:36.458 "raid_level": "raid1", 00:30:36.458 "superblock": true, 00:30:36.458 "num_base_bdevs": 2, 00:30:36.458 "num_base_bdevs_discovered": 1, 00:30:36.458 "num_base_bdevs_operational": 1, 00:30:36.458 "base_bdevs_list": [ 00:30:36.458 { 00:30:36.458 "name": null, 00:30:36.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.458 "is_configured": false, 00:30:36.458 "data_offset": 2048, 00:30:36.458 "data_size": 63488 00:30:36.458 }, 00:30:36.458 { 00:30:36.458 "name": "BaseBdev2", 00:30:36.458 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:36.458 "is_configured": true, 00:30:36.458 "data_offset": 2048, 00:30:36.458 "data_size": 63488 00:30:36.458 } 00:30:36.458 ] 00:30:36.458 }' 00:30:36.458 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:36.717 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:36.717 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:36.717 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:36.717 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:36.975 08:57:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:37.233 [2024-07-12 08:57:12.239129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:37.233 [2024-07-12 08:57:12.239258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.233 [2024-07-12 08:57:12.239304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:37.233 [2024-07-12 08:57:12.239329] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.233 [2024-07-12 08:57:12.239912] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.233 [2024-07-12 08:57:12.239957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:37.234 [2024-07-12 08:57:12.240093] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:37.234 [2024-07-12 08:57:12.240112] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:37.234 [2024-07-12 08:57:12.240121] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:37.234 BaseBdev1 00:30:37.234 08:57:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.192 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.451 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.451 "name": "raid_bdev1", 00:30:38.451 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:38.451 "strip_size_kb": 0, 00:30:38.451 "state": "online", 00:30:38.451 "raid_level": "raid1", 00:30:38.451 "superblock": true, 00:30:38.451 "num_base_bdevs": 2, 00:30:38.451 "num_base_bdevs_discovered": 1, 00:30:38.451 "num_base_bdevs_operational": 1, 00:30:38.451 "base_bdevs_list": [ 00:30:38.451 { 00:30:38.451 "name": null, 00:30:38.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.451 "is_configured": false, 00:30:38.451 "data_offset": 2048, 00:30:38.451 "data_size": 63488 00:30:38.451 }, 00:30:38.451 { 00:30:38.451 "name": "BaseBdev2", 00:30:38.451 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:38.451 "is_configured": true, 00:30:38.451 "data_offset": 2048, 00:30:38.451 "data_size": 63488 00:30:38.451 } 00:30:38.451 ] 00:30:38.451 }' 00:30:38.451 08:57:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.451 08:57:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:39.384 08:57:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:39.384 "name": "raid_bdev1", 00:30:39.384 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:39.384 "strip_size_kb": 0, 00:30:39.384 "state": "online", 00:30:39.384 "raid_level": "raid1", 00:30:39.384 "superblock": true, 00:30:39.384 "num_base_bdevs": 2, 00:30:39.384 "num_base_bdevs_discovered": 1, 00:30:39.384 "num_base_bdevs_operational": 1, 00:30:39.384 "base_bdevs_list": [ 00:30:39.384 { 00:30:39.384 "name": null, 00:30:39.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.384 "is_configured": false, 00:30:39.384 "data_offset": 2048, 00:30:39.384 "data_size": 63488 00:30:39.384 }, 00:30:39.384 { 00:30:39.384 "name": "BaseBdev2", 00:30:39.384 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:39.384 "is_configured": true, 00:30:39.384 "data_offset": 2048, 00:30:39.384 "data_size": 63488 00:30:39.384 } 00:30:39.384 ] 00:30:39.384 }' 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:39.384 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:39.643 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.643 [2024-07-12 08:57:14.831659] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:39.643 [2024-07-12 08:57:14.831857] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:39.643 [2024-07-12 08:57:14.831873] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:39.643 request: 00:30:39.643 { 00:30:39.643 "base_bdev": "BaseBdev1", 00:30:39.643 "raid_bdev": "raid_bdev1", 00:30:39.643 "method": "bdev_raid_add_base_bdev", 00:30:39.643 "req_id": 1 00:30:39.643 } 00:30:39.643 Got JSON-RPC error response 00:30:39.643 response: 00:30:39.643 { 00:30:39.643 "code": -22, 00:30:39.643 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:39.643 } 00:30:39.901 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:39.901 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:39.901 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:39.901 08:57:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:39.901 08:57:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.837 08:57:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.096 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.096 "name": "raid_bdev1", 00:30:41.096 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:41.096 "strip_size_kb": 0, 00:30:41.096 "state": "online", 00:30:41.096 "raid_level": "raid1", 00:30:41.096 "superblock": true, 00:30:41.096 "num_base_bdevs": 2, 00:30:41.096 "num_base_bdevs_discovered": 1, 00:30:41.096 "num_base_bdevs_operational": 1, 00:30:41.096 "base_bdevs_list": [ 00:30:41.096 { 00:30:41.096 "name": null, 00:30:41.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.096 "is_configured": false, 00:30:41.096 "data_offset": 2048, 00:30:41.096 "data_size": 63488 00:30:41.096 }, 00:30:41.096 { 00:30:41.096 "name": "BaseBdev2", 00:30:41.096 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 
00:30:41.096 "is_configured": true, 00:30:41.096 "data_offset": 2048, 00:30:41.096 "data_size": 63488 00:30:41.096 } 00:30:41.096 ] 00:30:41.096 }' 00:30:41.096 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.096 08:57:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.662 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:41.662 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:41.662 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:41.662 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:41.663 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:41.663 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.663 08:57:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.920 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:41.920 "name": "raid_bdev1", 00:30:41.920 "uuid": "f14f12b3-1353-40b4-b83c-a74a1b78d4bd", 00:30:41.920 "strip_size_kb": 0, 00:30:41.920 "state": "online", 00:30:41.920 "raid_level": "raid1", 00:30:41.920 "superblock": true, 00:30:41.920 "num_base_bdevs": 2, 00:30:41.920 "num_base_bdevs_discovered": 1, 00:30:41.920 "num_base_bdevs_operational": 1, 00:30:41.920 "base_bdevs_list": [ 00:30:41.920 { 00:30:41.920 "name": null, 00:30:41.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.920 "is_configured": false, 00:30:41.920 "data_offset": 2048, 00:30:41.920 "data_size": 63488 00:30:41.920 }, 00:30:41.920 { 00:30:41.920 "name": "BaseBdev2", 00:30:41.920 "uuid": "18dfc252-33ab-594e-b341-30debd99de9d", 00:30:41.920 "is_configured": true, 00:30:41.920 "data_offset": 2048, 00:30:41.920 "data_size": 63488 00:30:41.920 } 00:30:41.920 ] 00:30:41.920 }' 00:30:41.920 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 146818 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 146818 ']' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 146818 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146818 00:30:42.178 killing process with pid 146818 00:30:42.178 Received shutdown signal, test time was about 60.000000 seconds 00:30:42.178 00:30:42.178 Latency(us) 00:30:42.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.178 
=================================================================================================================== 00:30:42.178 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146818' 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 146818 00:30:42.178 08:57:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 146818 00:30:42.178 [2024-07-12 08:57:17.227616] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:42.178 [2024-07-12 08:57:17.227799] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:42.178 [2024-07-12 08:57:17.227866] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:42.178 [2024-07-12 08:57:17.227879] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:42.437 [2024-07-12 08:57:17.453220] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:43.374 ************************************ 00:30:43.374 END TEST raid_rebuild_test_sb 00:30:43.374 ************************************ 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:43.374 00:30:43.374 real 0m39.406s 00:30:43.374 user 0m59.691s 00:30:43.374 sys 0m5.379s 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.374 08:57:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:43.374 08:57:18 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:30:43.374 08:57:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:43.374 08:57:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.374 08:57:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:43.374 ************************************ 00:30:43.374 START TEST raid_rebuild_test_io 00:30:43.374 ************************************ 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:43.374 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=147851 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 147851 /var/tmp/spdk-raid.sock 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 147851 ']' 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.633 08:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.633 [2024-07-12 08:57:18.638040] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:30:43.633 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:43.633 Zero copy mechanism will not be used. 
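For readability, the bdevperf launch just traced, condensed with our reading of the flags; the later `bdevperf.py ... perform_tests` call in this log confirms that `-z` keeps the workload idle until triggered over RPC, while `-U` is copied verbatim without interpretation:

```bash
# Flags copied verbatim from the trace; comments are our interpretation.
# -r: private JSON-RPC socket      -T: exercise only raid_bdev1
# -t 60 -w randrw -M 50: 60 s of 50/50 mixed random reads and writes
# -o 3M -q 2: 3 MiB I/Os (hence the zero-copy notice above), queue depth 2
# -z: stay idle until perform_tests arrives over RPC; -L bdev_raid: debug log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
	-r /var/tmp/spdk-raid.sock -T raid_bdev1 \
	-t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!   # 147851 in this run
waitforlisten $raid_pid /var/tmp/spdk-raid.sock
```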
00:30:43.633 [2024-07-12 08:57:18.638278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147851 ] 00:30:43.633 [2024-07-12 08:57:18.808908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.892 [2024-07-12 08:57:19.007758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.150 [2024-07-12 08:57:19.186606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:44.718 08:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.718 08:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:30:44.718 08:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:44.718 08:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:44.718 BaseBdev1_malloc 00:30:44.718 08:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:44.976 [2024-07-12 08:57:20.117876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:44.976 [2024-07-12 08:57:20.118044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.976 [2024-07-12 08:57:20.118088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:44.977 [2024-07-12 08:57:20.118111] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.977 [2024-07-12 08:57:20.120599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.977 [2024-07-12 08:57:20.120680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:44.977 BaseBdev1 00:30:44.977 08:57:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:44.977 08:57:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:45.236 BaseBdev2_malloc 00:30:45.236 08:57:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:45.494 [2024-07-12 08:57:20.611558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:45.494 [2024-07-12 08:57:20.611725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:45.494 [2024-07-12 08:57:20.611771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:45.494 [2024-07-12 08:57:20.611794] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:45.494 [2024-07-12 08:57:20.614316] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:45.494 [2024-07-12 08:57:20.614385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:45.494 BaseBdev2 00:30:45.494 08:57:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
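Each base device in this test is a 32 MiB malloc disk (65536 blocks of 512 B, matching the num_blocks reported later) wrapped in a passthru bdev, so the test can detach it with `bdev_passthru_delete` without destroying the backing disk; the spare, built just below, gets an extra delay bdev in the middle. A condensed view — the commands are verbatim from the trace, while the `rpc` shorthand and the latency reading (delay parameters are in microseconds, so writes take ~100 ms) are ours:

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# One leg per base bdev (@601/@602): malloc disk plus detachable wrapper.
rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc          # 32 MiB, 512 B blocks
rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2  # what the raid claims

# The spare's leg (@606-@608, traced just below) inserts a delay bdev: reads
# pass through (-r 0 -t 0), writes wait ~100 ms (-w/-n 100000 us), keeping
# the rebuild slow enough for the test to observe it mid-flight.
rpc bdev_malloc_create 32 512 -b spare_malloc
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare
```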
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:45.752 spare_malloc 00:30:45.752 08:57:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:46.033 spare_delay 00:30:46.033 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:46.291 [2024-07-12 08:57:21.381925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:46.291 [2024-07-12 08:57:21.382085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:46.291 [2024-07-12 08:57:21.382129] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:30:46.291 [2024-07-12 08:57:21.382158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:46.291 [2024-07-12 08:57:21.384731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:46.291 [2024-07-12 08:57:21.384802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:46.291 spare 00:30:46.291 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:46.550 [2024-07-12 08:57:21.658030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:46.550 [2024-07-12 08:57:21.660149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:46.550 [2024-07-12 08:57:21.660302] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:30:46.550 [2024-07-12 08:57:21.660317] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:46.550 [2024-07-12 08:57:21.660532] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:30:46.550 [2024-07-12 08:57:21.660945] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:30:46.550 [2024-07-12 08:57:21.660969] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:30:46.550 [2024-07-12 08:57:21.661191] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.550 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.808 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:46.808 "name": "raid_bdev1", 00:30:46.808 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:46.808 "strip_size_kb": 0, 00:30:46.808 "state": "online", 00:30:46.808 "raid_level": "raid1", 00:30:46.808 "superblock": false, 00:30:46.808 "num_base_bdevs": 2, 00:30:46.808 "num_base_bdevs_discovered": 2, 00:30:46.808 "num_base_bdevs_operational": 2, 00:30:46.808 "base_bdevs_list": [ 00:30:46.808 { 00:30:46.808 "name": "BaseBdev1", 00:30:46.808 "uuid": "304d1f48-7ce8-523d-9838-ef2f393c8a7f", 00:30:46.808 "is_configured": true, 00:30:46.808 "data_offset": 0, 00:30:46.808 "data_size": 65536 00:30:46.808 }, 00:30:46.808 { 00:30:46.808 "name": "BaseBdev2", 00:30:46.808 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:46.808 "is_configured": true, 00:30:46.808 "data_offset": 0, 00:30:46.808 "data_size": 65536 00:30:46.808 } 00:30:46.808 ] 00:30:46.808 }' 00:30:46.808 08:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:46.808 08:57:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.744 08:57:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:47.744 08:57:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:47.744 [2024-07-12 08:57:22.850586] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:47.744 08:57:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:47.744 08:57:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.744 08:57:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:48.003 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:48.003 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:48.003 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:48.003 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:48.262 [2024-07-12 08:57:23.233629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:48.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:48.262 Zero copy mechanism will not be used. 00:30:48.262 Running I/O for 60 seconds... 
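The hot-remove-under-load step just traced: the xtrace interleaving (@639 printing before @622) indicates the perform_tests trigger runs as a background job while the base bdev is pulled out of the live array. A sketch of the shape of that step — the exact job control in bdev_raid.sh is not visible in this log and is assumed:

```bash
# Sketch only; job-control details are an assumption.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
	-s /var/tmp/spdk-raid.sock perform_tests &   # @622: start the 60 s randrw load
io_pid=$!

# @639: degrade raid_bdev1 while the workload is in flight
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
	bdev_raid_remove_base_bdev BaseBdev1
```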
00:30:48.262 [2024-07-12 08:57:23.397338] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:48.262 [2024-07-12 08:57:23.397586] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.262 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.520 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.520 "name": "raid_bdev1", 00:30:48.520 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:48.520 "strip_size_kb": 0, 00:30:48.520 "state": "online", 00:30:48.520 "raid_level": "raid1", 00:30:48.520 "superblock": false, 00:30:48.520 "num_base_bdevs": 2, 00:30:48.520 "num_base_bdevs_discovered": 1, 00:30:48.520 "num_base_bdevs_operational": 1, 00:30:48.520 "base_bdevs_list": [ 00:30:48.520 { 00:30:48.520 "name": null, 00:30:48.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.520 "is_configured": false, 00:30:48.520 "data_offset": 0, 00:30:48.520 "data_size": 65536 00:30:48.520 }, 00:30:48.520 { 00:30:48.520 "name": "BaseBdev2", 00:30:48.520 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:48.520 "is_configured": true, 00:30:48.520 "data_offset": 0, 00:30:48.520 "data_size": 65536 00:30:48.520 } 00:30:48.520 ] 00:30:48.520 }' 00:30:48.520 08:57:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.520 08:57:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.455 08:57:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:49.714 [2024-07-12 08:57:24.691901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:49.714 08:57:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:49.714 [2024-07-12 08:57:24.759241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:49.714 [2024-07-12 08:57:24.761489] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:49.714 [2024-07-12 08:57:24.877088] bdev_raid.c: 
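verify_raid_bdev_state (traced at @116 through @126 above) is the companion helper for array state rather than rebuild progress. The trace shows only its locals and the info fetch; the assertions at the end of this sketch are assumed from the local names and are not visible in this excerpt:

```bash
# Locals and fetch mirror the xtrace; the checks after the fetch are an
# assumed sketch (the trace cuts off at xtrace_disable).
verify_raid_bdev_state() {
	local raid_bdev_name=$1 expected_state=$2 raid_level=$3
	local strip_size=$4 num_base_bdevs_operational=$5
	local raid_bdev_info num_base_bdevs num_base_bdevs_discovered tmp

	raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
		-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
		jq -r ".[] | select(.name == \"$raid_bdev_name\")")

	[[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
	[[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
	[[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq "$num_base_bdevs_operational" ]]
}
```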
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:49.714 [2024-07-12 08:57:24.877859] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:49.973 [2024-07-12 08:57:25.112602] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:50.541 [2024-07-12 08:57:25.604364] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:50.541 [2024-07-12 08:57:25.604662] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.800 08:57:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.800 [2024-07-12 08:57:25.934473] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.059 "name": "raid_bdev1", 00:30:51.059 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:51.059 "strip_size_kb": 0, 00:30:51.059 "state": "online", 00:30:51.059 "raid_level": "raid1", 00:30:51.059 "superblock": false, 00:30:51.059 "num_base_bdevs": 2, 00:30:51.059 "num_base_bdevs_discovered": 2, 00:30:51.059 "num_base_bdevs_operational": 2, 00:30:51.059 "process": { 00:30:51.059 "type": "rebuild", 00:30:51.059 "target": "spare", 00:30:51.059 "progress": { 00:30:51.059 "blocks": 14336, 00:30:51.059 "percent": 21 00:30:51.059 } 00:30:51.059 }, 00:30:51.059 "base_bdevs_list": [ 00:30:51.059 { 00:30:51.059 "name": "spare", 00:30:51.059 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:51.059 "is_configured": true, 00:30:51.059 "data_offset": 0, 00:30:51.059 "data_size": 65536 00:30:51.059 }, 00:30:51.059 { 00:30:51.059 "name": "BaseBdev2", 00:30:51.059 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:51.059 "is_configured": true, 00:30:51.059 "data_offset": 0, 00:30:51.059 "data_size": 65536 00:30:51.059 } 00:30:51.059 ] 00:30:51.059 }' 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:51.059 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:51.059 [2024-07-12 08:57:26.144298] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:51.060 [2024-07-12 08:57:26.144684] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:51.318 [2024-07-12 08:57:26.383818] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:51.318 [2024-07-12 08:57:26.482587] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:51.578 [2024-07-12 08:57:26.583812] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:51.578 [2024-07-12 08:57:26.595405] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.578 [2024-07-12 08:57:26.595472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:51.578 [2024-07-12 08:57:26.595485] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:51.578 [2024-07-12 08:57:26.634371] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.578 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.837 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.837 "name": "raid_bdev1", 00:30:51.837 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:51.837 "strip_size_kb": 0, 00:30:51.837 "state": "online", 00:30:51.837 "raid_level": "raid1", 00:30:51.837 "superblock": false, 00:30:51.837 "num_base_bdevs": 2, 00:30:51.837 "num_base_bdevs_discovered": 1, 00:30:51.837 "num_base_bdevs_operational": 1, 00:30:51.837 "base_bdevs_list": [ 00:30:51.837 { 00:30:51.837 "name": null, 00:30:51.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.837 "is_configured": false, 00:30:51.837 "data_offset": 0, 00:30:51.837 "data_size": 65536 00:30:51.837 }, 00:30:51.837 { 00:30:51.837 "name": "BaseBdev2", 00:30:51.837 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:51.837 
"is_configured": true, 00:30:51.837 "data_offset": 0, 00:30:51.837 "data_size": 65536 00:30:51.837 } 00:30:51.837 ] 00:30:51.837 }' 00:30:51.837 08:57:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.837 08:57:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.774 "name": "raid_bdev1", 00:30:52.774 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:52.774 "strip_size_kb": 0, 00:30:52.774 "state": "online", 00:30:52.774 "raid_level": "raid1", 00:30:52.774 "superblock": false, 00:30:52.774 "num_base_bdevs": 2, 00:30:52.774 "num_base_bdevs_discovered": 1, 00:30:52.774 "num_base_bdevs_operational": 1, 00:30:52.774 "base_bdevs_list": [ 00:30:52.774 { 00:30:52.774 "name": null, 00:30:52.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.774 "is_configured": false, 00:30:52.774 "data_offset": 0, 00:30:52.774 "data_size": 65536 00:30:52.774 }, 00:30:52.774 { 00:30:52.774 "name": "BaseBdev2", 00:30:52.774 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:52.774 "is_configured": true, 00:30:52.774 "data_offset": 0, 00:30:52.774 "data_size": 65536 00:30:52.774 } 00:30:52.774 ] 00:30:52.774 }' 00:30:52.774 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:53.033 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:53.033 08:57:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:53.033 08:57:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:53.033 08:57:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:53.292 [2024-07-12 08:57:28.278669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:53.292 08:57:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:53.293 [2024-07-12 08:57:28.347279] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:53.293 [2024-07-12 08:57:28.349403] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:53.293 [2024-07-12 08:57:28.478532] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:53.293 [2024-07-12 08:57:28.479213] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:30:53.551 [2024-07-12 08:57:28.688098] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:53.551 [2024-07-12 08:57:28.688493] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:54.125 [2024-07-12 08:57:29.026716] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:54.125 [2024-07-12 08:57:29.027378] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:54.125 [2024-07-12 08:57:29.246392] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:54.125 [2024-07-12 08:57:29.246759] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.384 "name": "raid_bdev1", 00:30:54.384 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:54.384 "strip_size_kb": 0, 00:30:54.384 "state": "online", 00:30:54.384 "raid_level": "raid1", 00:30:54.384 "superblock": false, 00:30:54.384 "num_base_bdevs": 2, 00:30:54.384 "num_base_bdevs_discovered": 2, 00:30:54.384 "num_base_bdevs_operational": 2, 00:30:54.384 "process": { 00:30:54.384 "type": "rebuild", 00:30:54.384 "target": "spare", 00:30:54.384 "progress": { 00:30:54.384 "blocks": 12288, 00:30:54.384 "percent": 18 00:30:54.384 } 00:30:54.384 }, 00:30:54.384 "base_bdevs_list": [ 00:30:54.384 { 00:30:54.384 "name": "spare", 00:30:54.384 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:54.384 "is_configured": true, 00:30:54.384 "data_offset": 0, 00:30:54.384 "data_size": 65536 00:30:54.384 }, 00:30:54.384 { 00:30:54.384 "name": "BaseBdev2", 00:30:54.384 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:54.384 "is_configured": true, 00:30:54.384 "data_offset": 0, 00:30:54.384 "data_size": 65536 00:30:54.384 } 00:30:54.384 ] 00:30:54.384 }' 00:30:54.384 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.384 [2024-07-12 08:57:29.569087] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:54.384 [2024-07-12 08:57:29.569824] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=948 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.644 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.644 [2024-07-12 08:57:29.781028] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:54.903 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.903 "name": "raid_bdev1", 00:30:54.903 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:54.903 "strip_size_kb": 0, 00:30:54.903 "state": "online", 00:30:54.903 "raid_level": "raid1", 00:30:54.903 "superblock": false, 00:30:54.903 "num_base_bdevs": 2, 00:30:54.903 "num_base_bdevs_discovered": 2, 00:30:54.903 "num_base_bdevs_operational": 2, 00:30:54.903 "process": { 00:30:54.903 "type": "rebuild", 00:30:54.903 "target": "spare", 00:30:54.903 "progress": { 00:30:54.903 "blocks": 16384, 00:30:54.903 "percent": 25 00:30:54.903 } 00:30:54.903 }, 00:30:54.903 "base_bdevs_list": [ 00:30:54.903 { 00:30:54.903 "name": "spare", 00:30:54.903 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:54.903 "is_configured": true, 00:30:54.903 "data_offset": 0, 00:30:54.903 "data_size": 65536 00:30:54.903 }, 00:30:54.903 { 00:30:54.903 "name": "BaseBdev2", 00:30:54.903 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:54.903 "is_configured": true, 00:30:54.903 "data_offset": 0, 00:30:54.903 "data_size": 65536 00:30:54.903 } 00:30:54.903 ] 00:30:54.903 }' 00:30:54.903 08:57:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.903 08:57:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:54.903 08:57:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.903 08:57:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # 
[[ spare == \s\p\a\r\e ]] 00:30:54.903 08:57:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:55.162 [2024-07-12 08:57:30.135432] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:55.421 [2024-07-12 08:57:30.361819] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:55.421 [2024-07-12 08:57:30.362230] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:55.680 [2024-07-12 08:57:30.685100] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:55.680 [2024-07-12 08:57:30.810746] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:55.680 [2024-07-12 08:57:30.811141] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:55.939 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.940 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.199 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:56.199 "name": "raid_bdev1", 00:30:56.199 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:56.199 "strip_size_kb": 0, 00:30:56.199 "state": "online", 00:30:56.199 "raid_level": "raid1", 00:30:56.199 "superblock": false, 00:30:56.199 "num_base_bdevs": 2, 00:30:56.199 "num_base_bdevs_discovered": 2, 00:30:56.199 "num_base_bdevs_operational": 2, 00:30:56.199 "process": { 00:30:56.199 "type": "rebuild", 00:30:56.199 "target": "spare", 00:30:56.199 "progress": { 00:30:56.199 "blocks": 34816, 00:30:56.199 "percent": 53 00:30:56.199 } 00:30:56.199 }, 00:30:56.199 "base_bdevs_list": [ 00:30:56.199 { 00:30:56.199 "name": "spare", 00:30:56.199 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:56.199 "is_configured": true, 00:30:56.199 "data_offset": 0, 00:30:56.199 "data_size": 65536 00:30:56.199 }, 00:30:56.199 { 00:30:56.199 "name": "BaseBdev2", 00:30:56.199 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:56.199 "is_configured": true, 00:30:56.199 "data_offset": 0, 00:30:56.199 "data_size": 65536 00:30:56.199 } 00:30:56.199 ] 00:30:56.199 }' 00:30:56.199 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:56.458 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:56.458 08:57:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:56.458 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:56.458 08:57:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:56.458 [2024-07-12 08:57:31.635419] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:57.021 [2024-07-12 08:57:31.994524] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.279 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.538 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.538 "name": "raid_bdev1", 00:30:57.538 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:57.538 "strip_size_kb": 0, 00:30:57.538 "state": "online", 00:30:57.538 "raid_level": "raid1", 00:30:57.538 "superblock": false, 00:30:57.538 "num_base_bdevs": 2, 00:30:57.538 "num_base_bdevs_discovered": 2, 00:30:57.538 "num_base_bdevs_operational": 2, 00:30:57.538 "process": { 00:30:57.538 "type": "rebuild", 00:30:57.538 "target": "spare", 00:30:57.538 "progress": { 00:30:57.538 "blocks": 57344, 00:30:57.538 "percent": 87 00:30:57.538 } 00:30:57.538 }, 00:30:57.538 "base_bdevs_list": [ 00:30:57.538 { 00:30:57.538 "name": "spare", 00:30:57.538 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:57.538 "is_configured": true, 00:30:57.538 "data_offset": 0, 00:30:57.538 "data_size": 65536 00:30:57.538 }, 00:30:57.538 { 00:30:57.538 "name": "BaseBdev2", 00:30:57.538 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:57.538 "is_configured": true, 00:30:57.538 "data_offset": 0, 00:30:57.538 "data_size": 65536 00:30:57.538 } 00:30:57.538 ] 00:30:57.538 }' 00:30:57.538 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:57.797 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:57.797 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:57.797 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:57.797 08:57:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:58.055 [2024-07-12 08:57:33.111180] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:58.055 [2024-07-12 08:57:33.217815] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
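The xtrace above is the rebuild-progress poll around verify_raid_bdev_process: once per second the test re-reads the raid bdev over the RPC socket and inspects .process, whose jq fallback becomes "none" when the rebuild entry disappears. A condensed sketch of the same pattern, using only the rpc.py and jq invocations visible in the trace (the poll_rebuild wrapper name is illustrative, not part of bdev_raid.sh):

poll_rebuild() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/spdk-raid.sock
    local name=$1 info ptype
    while :; do
        # same RPC + jq select the trace uses to isolate one bdev's JSON
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        [[ $ptype == rebuild ]] || break   # .process vanishes once the rebuild is done
        jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
        sleep 1
    done
}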
00:30:58.055 [2024-07-12 08:57:33.219904] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.989 08:57:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.989 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:58.989 "name": "raid_bdev1", 00:30:58.989 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:58.989 "strip_size_kb": 0, 00:30:58.989 "state": "online", 00:30:58.989 "raid_level": "raid1", 00:30:58.989 "superblock": false, 00:30:58.989 "num_base_bdevs": 2, 00:30:58.989 "num_base_bdevs_discovered": 2, 00:30:58.989 "num_base_bdevs_operational": 2, 00:30:58.989 "base_bdevs_list": [ 00:30:58.989 { 00:30:58.989 "name": "spare", 00:30:58.989 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:58.989 "is_configured": true, 00:30:58.990 "data_offset": 0, 00:30:58.990 "data_size": 65536 00:30:58.990 }, 00:30:58.990 { 00:30:58.990 "name": "BaseBdev2", 00:30:58.990 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:58.990 "is_configured": true, 00:30:58.990 "data_offset": 0, 00:30:58.990 "data_size": 65536 00:30:58.990 } 00:30:58.990 ] 00:30:58.990 }' 00:30:58.990 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:58.990 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:58.990 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.248 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:30:59.507 "name": "raid_bdev1", 00:30:59.507 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:59.507 "strip_size_kb": 0, 00:30:59.507 "state": "online", 00:30:59.507 "raid_level": "raid1", 00:30:59.507 "superblock": false, 00:30:59.507 "num_base_bdevs": 2, 00:30:59.507 "num_base_bdevs_discovered": 2, 00:30:59.507 "num_base_bdevs_operational": 2, 00:30:59.507 "base_bdevs_list": [ 00:30:59.507 { 00:30:59.507 "name": "spare", 00:30:59.507 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:59.507 "is_configured": true, 00:30:59.507 "data_offset": 0, 00:30:59.507 "data_size": 65536 00:30:59.507 }, 00:30:59.507 { 00:30:59.507 "name": "BaseBdev2", 00:30:59.507 "uuid": "f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:59.507 "is_configured": true, 00:30:59.507 "data_offset": 0, 00:30:59.507 "data_size": 65536 00:30:59.507 } 00:30:59.507 ] 00:30:59.507 }' 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.507 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.766 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.766 "name": "raid_bdev1", 00:30:59.766 "uuid": "5d329b78-0baf-4b37-9648-ce1761a9b644", 00:30:59.766 "strip_size_kb": 0, 00:30:59.766 "state": "online", 00:30:59.766 "raid_level": "raid1", 00:30:59.766 "superblock": false, 00:30:59.766 "num_base_bdevs": 2, 00:30:59.766 "num_base_bdevs_discovered": 2, 00:30:59.766 "num_base_bdevs_operational": 2, 00:30:59.766 "base_bdevs_list": [ 00:30:59.766 { 00:30:59.766 "name": "spare", 00:30:59.766 "uuid": "6ff9075f-16c6-55dd-b89d-bcbbd20dbaa0", 00:30:59.766 "is_configured": true, 00:30:59.766 "data_offset": 0, 00:30:59.766 "data_size": 65536 00:30:59.766 }, 00:30:59.766 { 00:30:59.766 "name": "BaseBdev2", 00:30:59.766 "uuid": 
"f23ea452-4bb2-5449-8683-569e86878ecc", 00:30:59.766 "is_configured": true, 00:30:59.766 "data_offset": 0, 00:30:59.766 "data_size": 65536 00:30:59.766 } 00:30:59.766 ] 00:30:59.766 }' 00:30:59.766 08:57:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.766 08:57:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.703 08:57:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:00.703 [2024-07-12 08:57:35.836078] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.703 [2024-07-12 08:57:35.836132] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.963 00:31:00.963 Latency(us) 00:31:00.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:00.963 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:00.963 raid_bdev1 : 12.70 98.74 296.22 0.00 0.00 13033.50 344.44 116296.61 00:31:00.963 =================================================================================================================== 00:31:00.963 Total : 98.74 296.22 0.00 0.00 13033.50 344.44 116296.61 00:31:00.963 [2024-07-12 08:57:35.953860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.963 [2024-07-12 08:57:35.953941] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:00.963 0 00:31:00.963 [2024-07-12 08:57:35.954040] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.963 [2024-07-12 08:57:35.954056] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:31:00.963 08:57:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.963 08:57:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:01.223 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare 
/dev/nbd0 00:31:01.483 /dev/nbd0 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:01.483 1+0 records in 00:31:01.483 1+0 records out 00:31:01.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448965 s, 9.1 MB/s 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:01.483 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:01.742 
/dev/nbd1 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:01.742 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:01.742 1+0 records in 00:31:01.742 1+0 records out 00:31:01.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420568 s, 9.7 MB/s 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:01.743 08:57:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:02.001 08:57:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:02.001 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:02.260 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 147851 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 147851 ']' 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 147851 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147851 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147851' 00:31:02.519 killing process with pid 
147851 00:31:02.519 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 147851 00:31:02.520 Received shutdown signal, test time was about 14.305770 seconds 00:31:02.520 00:31:02.520 Latency(us) 00:31:02.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.520 =================================================================================================================== 00:31:02.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.520 [2024-07-12 08:57:37.541651] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:02.520 08:57:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 147851 00:31:02.779 [2024-07-12 08:57:37.723687] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:03.716 ************************************ 00:31:03.716 END TEST raid_rebuild_test_io 00:31:03.716 ************************************ 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:03.716 00:31:03.716 real 0m20.246s 00:31:03.716 user 0m31.499s 00:31:03.716 sys 0m2.126s 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.716 08:57:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:03.716 08:57:38 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:31:03.716 08:57:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:03.716 08:57:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:03.716 08:57:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:03.716 ************************************ 00:31:03.716 START TEST raid_rebuild_test_sb_io 00:31:03.716 ************************************ 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:03.716 08:57:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=148376 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 148376 /var/tmp/spdk-raid.sock 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 148376 ']' 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:03.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:03.716 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:03.717 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:03.717 08:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.975 [2024-07-12 08:57:38.933854] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:31:03.975 [2024-07-12 08:57:38.934247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148376 ] 00:31:03.975 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:03.975 Zero copy mechanism will not be used. 
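The sb_io variant being started here drives background I/O from bdevperf over the same RPC socket. A minimal sketch of that launch, with the flag values copied from the command line in the trace (waitforlisten is the autotest_common.sh helper, seen above, that blocks until the socket accepts RPCs):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# 60 s of 50/50 random read/write, 3 MiB I/Os at queue depth 2, raid debug traces on
"$bdevperf" -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock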
00:31:03.975 [2024-07-12 08:57:39.093127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.234 [2024-07-12 08:57:39.299079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.492 [2024-07-12 08:57:39.482478] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:04.751 08:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:04.751 08:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:31:04.751 08:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:04.751 08:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:05.009 BaseBdev1_malloc 00:31:05.009 08:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:05.268 [2024-07-12 08:57:40.422010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:05.268 [2024-07-12 08:57:40.422444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.268 [2024-07-12 08:57:40.422633] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:31:05.268 [2024-07-12 08:57:40.422751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.268 [2024-07-12 08:57:40.425275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.268 [2024-07-12 08:57:40.425459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:05.268 BaseBdev1 00:31:05.268 08:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:05.268 08:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:05.527 BaseBdev2_malloc 00:31:05.785 08:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:05.785 [2024-07-12 08:57:40.944231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:05.785 [2024-07-12 08:57:40.944720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.785 [2024-07-12 08:57:40.944907] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:31:05.785 [2024-07-12 08:57:40.945028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.785 [2024-07-12 08:57:40.947532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.785 [2024-07-12 08:57:40.947708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:05.785 BaseBdev2 00:31:05.785 08:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:06.044 spare_malloc 00:31:06.044 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:06.303 spare_delay 00:31:06.303 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:06.561 [2024-07-12 08:57:41.724375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:06.561 [2024-07-12 08:57:41.724813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.561 [2024-07-12 08:57:41.724969] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:31:06.561 [2024-07-12 08:57:41.725105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.561 [2024-07-12 08:57:41.727702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.561 [2024-07-12 08:57:41.727884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:06.562 spare 00:31:06.562 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:06.820 [2024-07-12 08:57:41.936523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:06.820 [2024-07-12 08:57:41.938797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:06.820 [2024-07-12 08:57:41.939194] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:31:06.820 [2024-07-12 08:57:41.939321] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:06.820 [2024-07-12 08:57:41.939512] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:06.820 [2024-07-12 08:57:41.940045] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:31:06.820 [2024-07-12 08:57:41.940205] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:31:06.820 [2024-07-12 08:57:41.940547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:06.820 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.821 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.821 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.821 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.821 08:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.080 08:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:07.080 "name": "raid_bdev1", 00:31:07.080 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:07.080 "strip_size_kb": 0, 00:31:07.080 "state": "online", 00:31:07.080 "raid_level": "raid1", 00:31:07.080 "superblock": true, 00:31:07.080 "num_base_bdevs": 2, 00:31:07.080 "num_base_bdevs_discovered": 2, 00:31:07.080 "num_base_bdevs_operational": 2, 00:31:07.080 "base_bdevs_list": [ 00:31:07.080 { 00:31:07.080 "name": "BaseBdev1", 00:31:07.080 "uuid": "218bdfe1-74aa-55b7-8e19-6d1ff087c9c0", 00:31:07.080 "is_configured": true, 00:31:07.080 "data_offset": 2048, 00:31:07.080 "data_size": 63488 00:31:07.080 }, 00:31:07.080 { 00:31:07.080 "name": "BaseBdev2", 00:31:07.080 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:07.080 "is_configured": true, 00:31:07.080 "data_offset": 2048, 00:31:07.080 "data_size": 63488 00:31:07.080 } 00:31:07.080 ] 00:31:07.080 }' 00:31:07.080 08:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:07.080 08:57:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.018 08:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:08.018 08:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:08.018 [2024-07-12 08:57:43.109191] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:08.018 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:31:08.018 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.018 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:08.276 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:08.276 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:31:08.276 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:08.276 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:31:08.534 [2024-07-12 08:57:43.532327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:08.534 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:08.534 Zero copy mechanism will not be used. 00:31:08.534 Running I/O for 60 seconds... 
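Condensing the setup traced above: each raid member is a malloc bdev wrapped in a passthru bdev, the spare additionally sits behind a delay bdev (100 ms write latency, presumably so the later rebuild runs slowly enough to sample), and -s on bdev_raid_create reserves the on-disk superblock that makes data_offset 2048 rather than 0 in the JSON dumps. A sketch with the sizes and names from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc   # 32 MiB at 512 B blocks = 65536 blocks
$rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
$rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
# the superblock consumes 2048 blocks, leaving the data_size of 63488 reported above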
00:31:08.534 [2024-07-12 08:57:43.673882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:08.534 [2024-07-12 08:57:43.681083] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.534 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.793 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.793 "name": "raid_bdev1", 00:31:08.793 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:08.793 "strip_size_kb": 0, 00:31:08.793 "state": "online", 00:31:08.793 "raid_level": "raid1", 00:31:08.793 "superblock": true, 00:31:08.793 "num_base_bdevs": 2, 00:31:08.793 "num_base_bdevs_discovered": 1, 00:31:08.793 "num_base_bdevs_operational": 1, 00:31:08.793 "base_bdevs_list": [ 00:31:08.793 { 00:31:08.793 "name": null, 00:31:08.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.793 "is_configured": false, 00:31:08.793 "data_offset": 2048, 00:31:08.793 "data_size": 63488 00:31:08.793 }, 00:31:08.793 { 00:31:08.793 "name": "BaseBdev2", 00:31:08.793 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:08.793 "is_configured": true, 00:31:08.793 "data_offset": 2048, 00:31:08.793 "data_size": 63488 00:31:08.793 } 00:31:08.793 ] 00:31:08.793 }' 00:31:08.793 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.793 08:57:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.730 08:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:09.730 [2024-07-12 08:57:44.858868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:09.730 08:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:09.730 [2024-07-12 08:57:44.914890] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:09.730 [2024-07-12 08:57:44.917246] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:09.988 
[2024-07-12 08:57:45.041820] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:09.988 [2024-07-12 08:57:45.042731] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:10.246 [2024-07-12 08:57:45.267016] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:10.246 [2024-07-12 08:57:45.267681] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:10.505 [2024-07-12 08:57:45.601029] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:10.764 [2024-07-12 08:57:45.718302] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:10.764 [2024-07-12 08:57:45.718943] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.764 08:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.023 [2024-07-12 08:57:46.048048] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:11.023 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:11.023 "name": "raid_bdev1", 00:31:11.023 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:11.023 "strip_size_kb": 0, 00:31:11.023 "state": "online", 00:31:11.023 "raid_level": "raid1", 00:31:11.023 "superblock": true, 00:31:11.023 "num_base_bdevs": 2, 00:31:11.023 "num_base_bdevs_discovered": 2, 00:31:11.023 "num_base_bdevs_operational": 2, 00:31:11.023 "process": { 00:31:11.023 "type": "rebuild", 00:31:11.023 "target": "spare", 00:31:11.023 "progress": { 00:31:11.023 "blocks": 14336, 00:31:11.023 "percent": 22 00:31:11.023 } 00:31:11.023 }, 00:31:11.023 "base_bdevs_list": [ 00:31:11.023 { 00:31:11.023 "name": "spare", 00:31:11.023 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:11.023 "is_configured": true, 00:31:11.023 "data_offset": 2048, 00:31:11.023 "data_size": 63488 00:31:11.023 }, 00:31:11.023 { 00:31:11.023 "name": "BaseBdev2", 00:31:11.023 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:11.023 "is_configured": true, 00:31:11.023 "data_offset": 2048, 00:31:11.023 "data_size": 63488 00:31:11.023 } 00:31:11.023 ] 00:31:11.023 }' 00:31:11.023 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:11.281 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:31:11.281 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:11.281 [2024-07-12 08:57:46.269505] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:11.281 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:11.281 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:11.540 [2024-07-12 08:57:46.507292] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:11.540 [2024-07-12 08:57:46.526281] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:11.540 [2024-07-12 08:57:46.633961] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:11.540 [2024-07-12 08:57:46.641645] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:11.540 [2024-07-12 08:57:46.651744] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.540 [2024-07-12 08:57:46.651984] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:11.540 [2024-07-12 08:57:46.652030] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:11.540 [2024-07-12 08:57:46.680943] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.540 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.821 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.821 "name": "raid_bdev1", 00:31:11.821 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:11.821 "strip_size_kb": 0, 00:31:11.821 "state": "online", 00:31:11.821 "raid_level": "raid1", 00:31:11.821 "superblock": true, 00:31:11.821 "num_base_bdevs": 2, 00:31:11.821 
"num_base_bdevs_discovered": 1, 00:31:11.821 "num_base_bdevs_operational": 1, 00:31:11.821 "base_bdevs_list": [ 00:31:11.821 { 00:31:11.821 "name": null, 00:31:11.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.821 "is_configured": false, 00:31:11.821 "data_offset": 2048, 00:31:11.821 "data_size": 63488 00:31:11.821 }, 00:31:11.821 { 00:31:11.821 "name": "BaseBdev2", 00:31:11.821 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:11.821 "is_configured": true, 00:31:11.821 "data_offset": 2048, 00:31:11.821 "data_size": 63488 00:31:11.821 } 00:31:11.821 ] 00:31:11.821 }' 00:31:11.821 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.821 08:57:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.772 08:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.030 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:13.030 "name": "raid_bdev1", 00:31:13.030 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:13.030 "strip_size_kb": 0, 00:31:13.030 "state": "online", 00:31:13.030 "raid_level": "raid1", 00:31:13.030 "superblock": true, 00:31:13.030 "num_base_bdevs": 2, 00:31:13.030 "num_base_bdevs_discovered": 1, 00:31:13.030 "num_base_bdevs_operational": 1, 00:31:13.030 "base_bdevs_list": [ 00:31:13.030 { 00:31:13.030 "name": null, 00:31:13.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.030 "is_configured": false, 00:31:13.030 "data_offset": 2048, 00:31:13.030 "data_size": 63488 00:31:13.030 }, 00:31:13.030 { 00:31:13.031 "name": "BaseBdev2", 00:31:13.031 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:13.031 "is_configured": true, 00:31:13.031 "data_offset": 2048, 00:31:13.031 "data_size": 63488 00:31:13.031 } 00:31:13.031 ] 00:31:13.031 }' 00:31:13.031 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:13.031 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:13.031 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:13.031 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:13.031 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:13.289 [2024-07-12 08:57:48.427777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:13.289 08:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:13.289 [2024-07-12 
08:57:48.474583] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:13.289 [2024-07-12 08:57:48.476951] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:13.547 [2024-07-12 08:57:48.613258] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:13.805 [2024-07-12 08:57:48.824644] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:13.805 [2024-07-12 08:57:48.825322] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:14.062 [2024-07-12 08:57:49.192989] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:14.062 [2024-07-12 08:57:49.200500] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:14.320 [2024-07-12 08:57:49.418251] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.320 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.579 [2024-07-12 08:57:49.651403] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:14.579 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:14.579 "name": "raid_bdev1", 00:31:14.579 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:14.579 "strip_size_kb": 0, 00:31:14.579 "state": "online", 00:31:14.579 "raid_level": "raid1", 00:31:14.579 "superblock": true, 00:31:14.579 "num_base_bdevs": 2, 00:31:14.579 "num_base_bdevs_discovered": 2, 00:31:14.579 "num_base_bdevs_operational": 2, 00:31:14.579 "process": { 00:31:14.579 "type": "rebuild", 00:31:14.579 "target": "spare", 00:31:14.579 "progress": { 00:31:14.579 "blocks": 14336, 00:31:14.579 "percent": 22 00:31:14.579 } 00:31:14.579 }, 00:31:14.579 "base_bdevs_list": [ 00:31:14.579 { 00:31:14.579 "name": "spare", 00:31:14.579 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:14.579 "is_configured": true, 00:31:14.579 "data_offset": 2048, 00:31:14.579 "data_size": 63488 00:31:14.579 }, 00:31:14.579 { 00:31:14.579 "name": "BaseBdev2", 00:31:14.579 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:14.579 "is_configured": true, 00:31:14.579 "data_offset": 2048, 00:31:14.579 "data_size": 63488 00:31:14.579 } 00:31:14.579 ] 00:31:14.579 }' 00:31:14.579 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
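The hot-removal exercised above reduces to two RPCs issued while bdevperf keeps I/O in flight: pull the spare out mid-rebuild, check that the array stays online but degraded, then re-add it so a fresh rebuild (the one whose progress JSON follows) begins. A sketch with the names from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev spare
$rpc bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") |
           "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'  # expect: online 1/2
$rpc bdev_raid_add_base_bdev raid_bdev1 spare   # rebuild starts over from block 0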
00:31:14.579 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:14.579 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:14.838 [2024-07-12 08:57:49.783643] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:14.838 [2024-07-12 08:57:49.799343] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:14.838 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=968 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.838 08:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.097 "name": "raid_bdev1", 00:31:15.097 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:15.097 "strip_size_kb": 0, 00:31:15.097 "state": "online", 00:31:15.097 "raid_level": "raid1", 00:31:15.097 "superblock": true, 00:31:15.097 "num_base_bdevs": 2, 00:31:15.097 "num_base_bdevs_discovered": 2, 00:31:15.097 "num_base_bdevs_operational": 2, 00:31:15.097 "process": { 00:31:15.097 "type": "rebuild", 00:31:15.097 "target": "spare", 00:31:15.097 "progress": { 00:31:15.097 "blocks": 18432, 00:31:15.097 "percent": 29 00:31:15.097 } 00:31:15.097 }, 00:31:15.097 "base_bdevs_list": [ 00:31:15.097 { 00:31:15.097 "name": "spare", 00:31:15.097 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:15.097 "is_configured": true, 00:31:15.097 "data_offset": 2048, 00:31:15.097 "data_size": 63488 00:31:15.097 }, 00:31:15.097 { 00:31:15.097 "name": "BaseBdev2", 00:31:15.097 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:15.097 "is_configured": true, 00:31:15.097 "data_offset": 2048, 00:31:15.097 "data_size": 
63488 00:31:15.097 } 00:31:15.097 ] 00:31:15.097 }' 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:15.097 08:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:15.663 [2024-07-12 08:57:50.643182] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:15.663 [2024-07-12 08:57:50.643839] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:15.921 [2024-07-12 08:57:50.995772] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.179 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.437 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:16.437 "name": "raid_bdev1", 00:31:16.437 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:16.438 "strip_size_kb": 0, 00:31:16.438 "state": "online", 00:31:16.438 "raid_level": "raid1", 00:31:16.438 "superblock": true, 00:31:16.438 "num_base_bdevs": 2, 00:31:16.438 "num_base_bdevs_discovered": 2, 00:31:16.438 "num_base_bdevs_operational": 2, 00:31:16.438 "process": { 00:31:16.438 "type": "rebuild", 00:31:16.438 "target": "spare", 00:31:16.438 "progress": { 00:31:16.438 "blocks": 38912, 00:31:16.438 "percent": 61 00:31:16.438 } 00:31:16.438 }, 00:31:16.438 "base_bdevs_list": [ 00:31:16.438 { 00:31:16.438 "name": "spare", 00:31:16.438 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:16.438 "is_configured": true, 00:31:16.438 "data_offset": 2048, 00:31:16.438 "data_size": 63488 00:31:16.438 }, 00:31:16.438 { 00:31:16.438 "name": "BaseBdev2", 00:31:16.438 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:16.438 "is_configured": true, 00:31:16.438 "data_offset": 2048, 00:31:16.438 "data_size": 63488 00:31:16.438 } 00:31:16.438 ] 00:31:16.438 }' 00:31:16.438 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:16.438 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:16.438 08:57:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:16.438 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:16.438 08:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:17.005 [2024-07-12 08:57:52.007500] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.572 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.572 [2024-07-12 08:57:52.673767] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:17.830 [2024-07-12 08:57:52.776965] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:17.830 [2024-07-12 08:57:52.779652] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:17.830 "name": "raid_bdev1", 00:31:17.830 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:17.830 "strip_size_kb": 0, 00:31:17.830 "state": "online", 00:31:17.830 "raid_level": "raid1", 00:31:17.830 "superblock": true, 00:31:17.830 "num_base_bdevs": 2, 00:31:17.830 "num_base_bdevs_discovered": 2, 00:31:17.830 "num_base_bdevs_operational": 2, 00:31:17.830 "base_bdevs_list": [ 00:31:17.830 { 00:31:17.830 "name": "spare", 00:31:17.830 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:17.830 "is_configured": true, 00:31:17.830 "data_offset": 2048, 00:31:17.830 "data_size": 63488 00:31:17.830 }, 00:31:17.830 { 00:31:17.830 "name": "BaseBdev2", 00:31:17.830 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:17.830 "is_configured": true, 00:31:17.830 "data_offset": 2048, 00:31:17.830 "data_size": 63488 00:31:17.830 } 00:31:17.830 ] 00:31:17.830 }' 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:17.830 08:57:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.830 08:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.089 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.089 "name": "raid_bdev1", 00:31:18.089 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:18.089 "strip_size_kb": 0, 00:31:18.089 "state": "online", 00:31:18.089 "raid_level": "raid1", 00:31:18.089 "superblock": true, 00:31:18.089 "num_base_bdevs": 2, 00:31:18.089 "num_base_bdevs_discovered": 2, 00:31:18.089 "num_base_bdevs_operational": 2, 00:31:18.089 "base_bdevs_list": [ 00:31:18.089 { 00:31:18.089 "name": "spare", 00:31:18.089 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:18.089 "is_configured": true, 00:31:18.089 "data_offset": 2048, 00:31:18.089 "data_size": 63488 00:31:18.089 }, 00:31:18.089 { 00:31:18.089 "name": "BaseBdev2", 00:31:18.089 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:18.089 "is_configured": true, 00:31:18.089 "data_offset": 2048, 00:31:18.089 "data_size": 63488 00:31:18.089 } 00:31:18.089 ] 00:31:18.089 }' 00:31:18.089 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.089 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:18.089 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:18.347 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.348 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.605 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:18.605 "name": "raid_bdev1", 00:31:18.605 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:18.605 "strip_size_kb": 0, 00:31:18.605 "state": "online", 00:31:18.605 "raid_level": "raid1", 00:31:18.605 "superblock": true, 00:31:18.605 "num_base_bdevs": 2, 00:31:18.605 "num_base_bdevs_discovered": 2, 00:31:18.605 "num_base_bdevs_operational": 2, 00:31:18.605 "base_bdevs_list": [ 00:31:18.605 { 00:31:18.605 "name": "spare", 00:31:18.605 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:18.605 "is_configured": true, 00:31:18.605 "data_offset": 2048, 00:31:18.605 "data_size": 63488 00:31:18.605 }, 00:31:18.605 { 00:31:18.605 "name": "BaseBdev2", 00:31:18.605 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:18.605 "is_configured": true, 00:31:18.605 "data_offset": 2048, 00:31:18.605 "data_size": 63488 00:31:18.605 } 00:31:18.605 ] 00:31:18.605 }' 00:31:18.605 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:18.605 08:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:19.171 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:19.430 [2024-07-12 08:57:54.522398] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:19.430 [2024-07-12 08:57:54.522657] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:19.430 00:31:19.430 Latency(us) 00:31:19.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.430 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:19.430 raid_bdev1 : 11.09 118.96 356.89 0.00 0.00 11458.14 348.16 117249.86 00:31:19.430 =================================================================================================================== 00:31:19.430 Total : 118.96 356.89 0.00 0.00 11458.14 348.16 117249.86 00:31:19.688 [2024-07-12 08:57:54.640574] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.688 [2024-07-12 08:57:54.640824] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:19.688 [2024-07-12 08:57:54.640954] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:19.688 0 00:31:19.688 [2024-07-12 08:57:54.641110] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:31:19.688 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.688 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:19.947 08:57:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:20.205 /dev/nbd0 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:20.205 1+0 records in 00:31:20.205 1+0 records out 00:31:20.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465618 s, 8.8 MB/s 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:20.205 08:57:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.205 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:20.463 /dev/nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:20.464 1+0 records in 00:31:20.464 1+0 records out 00:31:20.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611197 s, 6.7 MB/s 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.464 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:20.722 08:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:21.289 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:21.548 [2024-07-12 08:57:56.615130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:21.548 [2024-07-12 08:57:56.615523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.548 [2024-07-12 08:57:56.615703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:21.548 [2024-07-12 08:57:56.615838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.548 [2024-07-12 08:57:56.618535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.548 [2024-07-12 08:57:56.618736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:21.548 [2024-07-12 08:57:56.618986] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:21.548 [2024-07-12 08:57:56.619158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:21.548 [2024-07-12 08:57:56.619474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:21.548 spare 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.548 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:21.548 [2024-07-12 08:57:56.719765] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:21.548 [2024-07-12 08:57:56.720034] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:21.548 [2024-07-12 08:57:56.720259] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c5f0 00:31:21.548 [2024-07-12 08:57:56.720864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:21.548 [2024-07-12 08:57:56.721023] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:21.548 [2024-07-12 08:57:56.721286] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:21.806 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:31:21.806 "name": "raid_bdev1", 00:31:21.806 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:21.806 "strip_size_kb": 0, 00:31:21.806 "state": "online", 00:31:21.806 "raid_level": "raid1", 00:31:21.806 "superblock": true, 00:31:21.806 "num_base_bdevs": 2, 00:31:21.806 "num_base_bdevs_discovered": 2, 00:31:21.806 "num_base_bdevs_operational": 2, 00:31:21.806 "base_bdevs_list": [ 00:31:21.806 { 00:31:21.806 "name": "spare", 00:31:21.806 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:21.806 "is_configured": true, 00:31:21.806 "data_offset": 2048, 00:31:21.806 "data_size": 63488 00:31:21.806 }, 00:31:21.806 { 00:31:21.806 "name": "BaseBdev2", 00:31:21.806 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:21.806 "is_configured": true, 00:31:21.806 "data_offset": 2048, 00:31:21.806 "data_size": 63488 00:31:21.806 } 00:31:21.806 ] 00:31:21.806 }' 00:31:21.806 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:21.806 08:57:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.741 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.741 "name": "raid_bdev1", 00:31:22.742 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:22.742 "strip_size_kb": 0, 00:31:22.742 "state": "online", 00:31:22.742 "raid_level": "raid1", 00:31:22.742 "superblock": true, 00:31:22.742 "num_base_bdevs": 2, 00:31:22.742 "num_base_bdevs_discovered": 2, 00:31:22.742 "num_base_bdevs_operational": 2, 00:31:22.742 "base_bdevs_list": [ 00:31:22.742 { 00:31:22.742 "name": "spare", 00:31:22.742 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:22.742 "is_configured": true, 00:31:22.742 "data_offset": 2048, 00:31:22.742 "data_size": 63488 00:31:22.742 }, 00:31:22.742 { 00:31:22.742 "name": "BaseBdev2", 00:31:22.742 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:22.742 "is_configured": true, 00:31:22.742 "data_offset": 2048, 00:31:22.742 "data_size": 63488 00:31:22.742 } 00:31:22.742 ] 00:31:22.742 }' 00:31:22.742 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:22.742 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:22.742 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.000 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.000 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.000 08:57:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:23.000 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:23.000 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:23.259 [2024-07-12 08:57:58.436222] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.259 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.518 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.518 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.518 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.518 "name": "raid_bdev1", 00:31:23.518 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:23.518 "strip_size_kb": 0, 00:31:23.518 "state": "online", 00:31:23.518 "raid_level": "raid1", 00:31:23.518 "superblock": true, 00:31:23.518 "num_base_bdevs": 2, 00:31:23.518 "num_base_bdevs_discovered": 1, 00:31:23.518 "num_base_bdevs_operational": 1, 00:31:23.518 "base_bdevs_list": [ 00:31:23.518 { 00:31:23.518 "name": null, 00:31:23.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.518 "is_configured": false, 00:31:23.518 "data_offset": 2048, 00:31:23.518 "data_size": 63488 00:31:23.518 }, 00:31:23.518 { 00:31:23.518 "name": "BaseBdev2", 00:31:23.518 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:23.518 "is_configured": true, 00:31:23.518 "data_offset": 2048, 00:31:23.518 "data_size": 63488 00:31:23.518 } 00:31:23.518 ] 00:31:23.518 }' 00:31:23.518 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.518 08:57:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:24.453 08:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:24.453 [2024-07-12 08:57:59.608716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:24.453 [2024-07-12 08:57:59.609244] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:24.453 [2024-07-12 08:57:59.609369] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:24.453 [2024-07-12 08:57:59.609500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:24.453 [2024-07-12 08:57:59.622981] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c790 00:31:24.453 [2024-07-12 08:57:59.625282] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:24.453 08:57:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:25.830 "name": "raid_bdev1", 00:31:25.830 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:25.830 "strip_size_kb": 0, 00:31:25.830 "state": "online", 00:31:25.830 "raid_level": "raid1", 00:31:25.830 "superblock": true, 00:31:25.830 "num_base_bdevs": 2, 00:31:25.830 "num_base_bdevs_discovered": 2, 00:31:25.830 "num_base_bdevs_operational": 2, 00:31:25.830 "process": { 00:31:25.830 "type": "rebuild", 00:31:25.830 "target": "spare", 00:31:25.830 "progress": { 00:31:25.830 "blocks": 24576, 00:31:25.830 "percent": 38 00:31:25.830 } 00:31:25.830 }, 00:31:25.830 "base_bdevs_list": [ 00:31:25.830 { 00:31:25.830 "name": "spare", 00:31:25.830 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:25.830 "is_configured": true, 00:31:25.830 "data_offset": 2048, 00:31:25.830 "data_size": 63488 00:31:25.830 }, 00:31:25.830 { 00:31:25.830 "name": "BaseBdev2", 00:31:25.830 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:25.830 "is_configured": true, 00:31:25.830 "data_offset": 2048, 00:31:25.830 "data_size": 63488 00:31:25.830 } 00:31:25.830 ] 00:31:25.830 }' 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.830 08:58:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:25.830 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.830 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:26.089 [2024-07-12 08:58:01.251165] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:26.347 [2024-07-12 08:58:01.336873] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:26.348 [2024-07-12 08:58:01.337318] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:26.348 [2024-07-12 08:58:01.337445] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:26.348 [2024-07-12 08:58:01.337486] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.348 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.606 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:26.606 "name": "raid_bdev1", 00:31:26.606 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:26.606 "strip_size_kb": 0, 00:31:26.606 "state": "online", 00:31:26.606 "raid_level": "raid1", 00:31:26.606 "superblock": true, 00:31:26.606 "num_base_bdevs": 2, 00:31:26.606 "num_base_bdevs_discovered": 1, 00:31:26.606 "num_base_bdevs_operational": 1, 00:31:26.606 "base_bdevs_list": [ 00:31:26.606 { 00:31:26.606 "name": null, 00:31:26.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.606 "is_configured": false, 00:31:26.606 "data_offset": 2048, 00:31:26.606 "data_size": 63488 00:31:26.606 }, 00:31:26.606 { 00:31:26.606 "name": "BaseBdev2", 00:31:26.606 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:26.606 "is_configured": true, 00:31:26.606 "data_offset": 2048, 00:31:26.606 "data_size": 63488 00:31:26.606 } 00:31:26.606 ] 00:31:26.606 }' 00:31:26.606 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:26.606 08:58:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:27.173 08:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:27.432 [2024-07-12 08:58:02.539282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:27.432 [2024-07-12 
08:58:02.539699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.432 [2024-07-12 08:58:02.539776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:27.432 [2024-07-12 08:58:02.540055] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.432 [2024-07-12 08:58:02.540717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.432 [2024-07-12 08:58:02.540890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:27.432 [2024-07-12 08:58:02.541127] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:27.432 [2024-07-12 08:58:02.541245] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:27.432 [2024-07-12 08:58:02.541343] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:27.432 [2024-07-12 08:58:02.541479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:27.432 [2024-07-12 08:58:02.555271] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cad0 00:31:27.432 spare 00:31:27.432 [2024-07-12 08:58:02.557574] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:27.432 08:58:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.441 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.724 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.724 "name": "raid_bdev1", 00:31:28.724 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:28.724 "strip_size_kb": 0, 00:31:28.724 "state": "online", 00:31:28.724 "raid_level": "raid1", 00:31:28.724 "superblock": true, 00:31:28.724 "num_base_bdevs": 2, 00:31:28.724 "num_base_bdevs_discovered": 2, 00:31:28.724 "num_base_bdevs_operational": 2, 00:31:28.724 "process": { 00:31:28.724 "type": "rebuild", 00:31:28.724 "target": "spare", 00:31:28.724 "progress": { 00:31:28.724 "blocks": 24576, 00:31:28.724 "percent": 38 00:31:28.724 } 00:31:28.724 }, 00:31:28.724 "base_bdevs_list": [ 00:31:28.724 { 00:31:28.724 "name": "spare", 00:31:28.724 "uuid": "9b547945-6e92-5533-a4e9-8b0706a289e4", 00:31:28.724 "is_configured": true, 00:31:28.724 "data_offset": 2048, 00:31:28.724 "data_size": 63488 00:31:28.724 }, 00:31:28.724 { 00:31:28.724 "name": "BaseBdev2", 00:31:28.724 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:28.724 "is_configured": true, 00:31:28.724 "data_offset": 2048, 00:31:28.724 "data_size": 63488 
00:31:28.724 } 00:31:28.724 ] 00:31:28.724 }' 00:31:28.724 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.724 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:28.724 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.983 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:28.983 08:58:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:28.983 [2024-07-12 08:58:04.171426] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:29.243 [2024-07-12 08:58:04.269162] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:29.243 [2024-07-12 08:58:04.269542] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:29.243 [2024-07-12 08:58:04.269671] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:29.243 [2024-07-12 08:58:04.269713] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.243 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.502 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:29.502 "name": "raid_bdev1", 00:31:29.502 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:29.502 "strip_size_kb": 0, 00:31:29.502 "state": "online", 00:31:29.502 "raid_level": "raid1", 00:31:29.502 "superblock": true, 00:31:29.502 "num_base_bdevs": 2, 00:31:29.502 "num_base_bdevs_discovered": 1, 00:31:29.502 "num_base_bdevs_operational": 1, 00:31:29.502 "base_bdevs_list": [ 00:31:29.502 { 00:31:29.502 "name": null, 00:31:29.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.502 "is_configured": false, 00:31:29.502 "data_offset": 2048, 00:31:29.502 "data_size": 63488 00:31:29.502 }, 00:31:29.502 { 00:31:29.502 "name": "BaseBdev2", 00:31:29.502 "uuid": 
"1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:29.503 "is_configured": true, 00:31:29.503 "data_offset": 2048, 00:31:29.503 "data_size": 63488 00:31:29.503 } 00:31:29.503 ] 00:31:29.503 }' 00:31:29.503 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:29.503 08:58:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:30.439 "name": "raid_bdev1", 00:31:30.439 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:30.439 "strip_size_kb": 0, 00:31:30.439 "state": "online", 00:31:30.439 "raid_level": "raid1", 00:31:30.439 "superblock": true, 00:31:30.439 "num_base_bdevs": 2, 00:31:30.439 "num_base_bdevs_discovered": 1, 00:31:30.439 "num_base_bdevs_operational": 1, 00:31:30.439 "base_bdevs_list": [ 00:31:30.439 { 00:31:30.439 "name": null, 00:31:30.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.439 "is_configured": false, 00:31:30.439 "data_offset": 2048, 00:31:30.439 "data_size": 63488 00:31:30.439 }, 00:31:30.439 { 00:31:30.439 "name": "BaseBdev2", 00:31:30.439 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:30.439 "is_configured": true, 00:31:30.439 "data_offset": 2048, 00:31:30.439 "data_size": 63488 00:31:30.439 } 00:31:30.439 ] 00:31:30.439 }' 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:30.439 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:31.006 08:58:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:31.006 [2024-07-12 08:58:06.173136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:31.006 [2024-07-12 08:58:06.173514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.006 [2024-07-12 08:58:06.173679] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:31:31.006 [2024-07-12 08:58:06.173801] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.006 [2024-07-12 08:58:06.174403] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.006 [2024-07-12 08:58:06.174600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:31.006 [2024-07-12 08:58:06.174830] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:31.006 [2024-07-12 08:58:06.174938] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:31.006 [2024-07-12 08:58:06.175034] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:31.006 BaseBdev1 00:31:31.006 08:58:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.384 "name": "raid_bdev1", 00:31:32.384 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:32.384 "strip_size_kb": 0, 00:31:32.384 "state": "online", 00:31:32.384 "raid_level": "raid1", 00:31:32.384 "superblock": true, 00:31:32.384 "num_base_bdevs": 2, 00:31:32.384 "num_base_bdevs_discovered": 1, 00:31:32.384 "num_base_bdevs_operational": 1, 00:31:32.384 "base_bdevs_list": [ 00:31:32.384 { 00:31:32.384 "name": null, 00:31:32.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.384 "is_configured": false, 00:31:32.384 "data_offset": 2048, 00:31:32.384 "data_size": 63488 00:31:32.384 }, 00:31:32.384 { 00:31:32.384 "name": "BaseBdev2", 00:31:32.384 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:32.384 "is_configured": true, 00:31:32.384 "data_offset": 2048, 00:31:32.384 "data_size": 63488 00:31:32.384 } 00:31:32.384 ] 00:31:32.384 }' 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.384 08:58:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.321 "name": "raid_bdev1", 00:31:33.321 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:33.321 "strip_size_kb": 0, 00:31:33.321 "state": "online", 00:31:33.321 "raid_level": "raid1", 00:31:33.321 "superblock": true, 00:31:33.321 "num_base_bdevs": 2, 00:31:33.321 "num_base_bdevs_discovered": 1, 00:31:33.321 "num_base_bdevs_operational": 1, 00:31:33.321 "base_bdevs_list": [ 00:31:33.321 { 00:31:33.321 "name": null, 00:31:33.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.321 "is_configured": false, 00:31:33.321 "data_offset": 2048, 00:31:33.321 "data_size": 63488 00:31:33.321 }, 00:31:33.321 { 00:31:33.321 "name": "BaseBdev2", 00:31:33.321 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:33.321 "is_configured": true, 00:31:33.321 "data_offset": 2048, 00:31:33.321 "data_size": 63488 00:31:33.321 } 00:31:33.321 ] 00:31:33.321 }' 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:33.321 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:33.580 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:33.840 [2024-07-12 08:58:08.826040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:33.840 [2024-07-12 08:58:08.826521] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:33.840 [2024-07-12 08:58:08.826641] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:33.840 request: 00:31:33.840 { 00:31:33.840 "base_bdev": "BaseBdev1", 00:31:33.840 "raid_bdev": "raid_bdev1", 00:31:33.840 "method": "bdev_raid_add_base_bdev", 00:31:33.840 "req_id": 1 00:31:33.840 } 00:31:33.840 Got JSON-RPC error response 00:31:33.840 response: 00:31:33.840 { 00:31:33.840 "code": -22, 00:31:33.840 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:33.840 } 00:31:33.840 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:31:33.840 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:33.840 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:33.840 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:33.840 08:58:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.777 08:58:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.050 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.050 "name": "raid_bdev1", 00:31:35.050 "uuid": 
"fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:35.050 "strip_size_kb": 0, 00:31:35.050 "state": "online", 00:31:35.050 "raid_level": "raid1", 00:31:35.050 "superblock": true, 00:31:35.050 "num_base_bdevs": 2, 00:31:35.050 "num_base_bdevs_discovered": 1, 00:31:35.050 "num_base_bdevs_operational": 1, 00:31:35.050 "base_bdevs_list": [ 00:31:35.050 { 00:31:35.050 "name": null, 00:31:35.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.050 "is_configured": false, 00:31:35.050 "data_offset": 2048, 00:31:35.050 "data_size": 63488 00:31:35.050 }, 00:31:35.050 { 00:31:35.050 "name": "BaseBdev2", 00:31:35.050 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:35.050 "is_configured": true, 00:31:35.050 "data_offset": 2048, 00:31:35.050 "data_size": 63488 00:31:35.050 } 00:31:35.050 ] 00:31:35.050 }' 00:31:35.050 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.050 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.618 08:58:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.877 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.877 "name": "raid_bdev1", 00:31:35.877 "uuid": "fc91c69d-5bc6-4aa1-83ea-0584e4e453c0", 00:31:35.877 "strip_size_kb": 0, 00:31:35.877 "state": "online", 00:31:35.877 "raid_level": "raid1", 00:31:35.877 "superblock": true, 00:31:35.877 "num_base_bdevs": 2, 00:31:35.877 "num_base_bdevs_discovered": 1, 00:31:35.877 "num_base_bdevs_operational": 1, 00:31:35.877 "base_bdevs_list": [ 00:31:35.877 { 00:31:35.877 "name": null, 00:31:35.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.877 "is_configured": false, 00:31:35.877 "data_offset": 2048, 00:31:35.877 "data_size": 63488 00:31:35.877 }, 00:31:35.877 { 00:31:35.877 "name": "BaseBdev2", 00:31:35.877 "uuid": "1261ad41-25cd-5e80-a084-e62678b632e5", 00:31:35.877 "is_configured": true, 00:31:35.877 "data_offset": 2048, 00:31:35.877 "data_size": 63488 00:31:35.877 } 00:31:35.877 ] 00:31:35.877 }' 00:31:35.877 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 148376 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 148376 ']' 00:31:36.141 
08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 148376 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148376 00:31:36.141 killing process with pid 148376 00:31:36.141 Received shutdown signal, test time was about 27.636842 seconds 00:31:36.141 00:31:36.141 Latency(us) 00:31:36.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.141 =================================================================================================================== 00:31:36.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148376' 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 148376 00:31:36.141 08:58:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 148376 00:31:36.141 [2024-07-12 08:58:11.171820] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:36.141 [2024-07-12 08:58:11.171990] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:36.141 [2024-07-12 08:58:11.172048] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:36.141 [2024-07-12 08:58:11.172059] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:31:36.398 [2024-07-12 08:58:11.354058] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:37.333 ************************************ 00:31:37.333 END TEST raid_rebuild_test_sb_io 00:31:37.333 ************************************ 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:37.333 00:31:37.333 real 0m33.578s 00:31:37.333 user 0m54.600s 00:31:37.333 sys 0m3.464s 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:37.333 08:58:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:37.333 08:58:12 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:31:37.333 08:58:12 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:31:37.333 08:58:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:37.333 08:58:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.333 08:58:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:37.333 ************************************ 00:31:37.333 START TEST raid_rebuild_test 00:31:37.333 ************************************ 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:37.333 
08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=149317 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 149317 /var/tmp/spdk-raid.sock 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 149317 ']' 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:37.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.333 08:58:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.592 [2024-07-12 08:58:12.599480] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:31:37.592 [2024-07-12 08:58:12.599879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149317 ] 00:31:37.592 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:37.592 Zero copy mechanism will not be used. 00:31:37.592 [2024-07-12 08:58:12.772158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.850 [2024-07-12 08:58:12.977165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.108 [2024-07-12 08:58:13.156592] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:38.366 08:58:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.366 08:58:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:31:38.366 08:58:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:38.366 08:58:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:38.623 BaseBdev1_malloc 00:31:38.882 08:58:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:38.882 [2024-07-12 08:58:14.039508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:38.882 [2024-07-12 08:58:14.039861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:38.882 [2024-07-12 08:58:14.040023] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:31:38.882 [2024-07-12 08:58:14.040133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:38.882 [2024-07-12 08:58:14.042787] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:38.882 [2024-07-12 08:58:14.042966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:38.882 BaseBdev1 00:31:38.882 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:38.882 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:39.449 BaseBdev2_malloc 00:31:39.449 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:39.449 [2024-07-12 08:58:14.585693] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:39.449 [2024-07-12 08:58:14.586171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.449 [2024-07-12 08:58:14.586339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:31:39.449 [2024-07-12 08:58:14.586455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.449 [2024-07-12 08:58:14.589057] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.449 [2024-07-12 08:58:14.589233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:39.449 BaseBdev2 00:31:39.449 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:39.449 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:39.707 BaseBdev3_malloc 00:31:39.707 08:58:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:39.965 [2024-07-12 08:58:15.070075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:39.965 [2024-07-12 08:58:15.070545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.965 [2024-07-12 08:58:15.070698] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:31:39.965 [2024-07-12 08:58:15.070818] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.965 [2024-07-12 08:58:15.073481] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.965 [2024-07-12 08:58:15.073666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:39.965 BaseBdev3 00:31:39.965 08:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:39.965 08:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:40.223 BaseBdev4_malloc 00:31:40.223 08:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:40.482 [2024-07-12 08:58:15.563520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:40.482 [2024-07-12 08:58:15.563907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.482 [2024-07-12 08:58:15.564091] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:40.482 [2024-07-12 08:58:15.564225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.482 [2024-07-12 08:58:15.566904] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.482 [2024-07-12 08:58:15.567089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:40.482 BaseBdev4 00:31:40.482 08:58:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:40.740 spare_malloc 00:31:40.740 08:58:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:40.998 spare_delay 00:31:40.998 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:41.257 [2024-07-12 08:58:16.320871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:41.257 [2024-07-12 08:58:16.321278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:41.257 [2024-07-12 08:58:16.321434] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:41.257 [2024-07-12 08:58:16.321558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:41.257 [2024-07-12 08:58:16.324155] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:41.257 [2024-07-12 08:58:16.324369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:41.257 spare 00:31:41.257 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:31:41.516 [2024-07-12 08:58:16.540980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:41.516 [2024-07-12 08:58:16.543329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:41.516 [2024-07-12 08:58:16.543574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:41.516 [2024-07-12 08:58:16.543792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:41.516 [2024-07-12 08:58:16.544036] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:41.516 [2024-07-12 08:58:16.544147] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:41.516 [2024-07-12 08:58:16.544390] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:41.516 [2024-07-12 08:58:16.544953] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:41.516 [2024-07-12 08:58:16.545091] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:41.516 [2024-07-12 08:58:16.545431] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.516 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.775 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:41.775 "name": "raid_bdev1", 00:31:41.775 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:41.775 "strip_size_kb": 0, 00:31:41.775 "state": "online", 00:31:41.775 "raid_level": "raid1", 00:31:41.775 "superblock": false, 00:31:41.775 "num_base_bdevs": 4, 00:31:41.775 "num_base_bdevs_discovered": 4, 00:31:41.775 "num_base_bdevs_operational": 4, 00:31:41.775 "base_bdevs_list": [ 00:31:41.775 { 00:31:41.775 "name": "BaseBdev1", 00:31:41.775 "uuid": "167ed8ad-1f50-5192-8e1b-f5198ae96300", 00:31:41.775 "is_configured": true, 00:31:41.775 "data_offset": 0, 00:31:41.775 "data_size": 65536 00:31:41.775 }, 00:31:41.775 { 00:31:41.775 "name": "BaseBdev2", 00:31:41.775 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:41.775 "is_configured": true, 00:31:41.775 "data_offset": 0, 00:31:41.775 "data_size": 65536 00:31:41.775 }, 00:31:41.775 { 00:31:41.775 "name": "BaseBdev3", 00:31:41.775 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:41.775 "is_configured": true, 00:31:41.775 "data_offset": 0, 00:31:41.775 "data_size": 65536 00:31:41.775 }, 00:31:41.775 { 00:31:41.775 "name": "BaseBdev4", 00:31:41.775 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:41.775 "is_configured": true, 00:31:41.775 "data_offset": 0, 00:31:41.775 "data_size": 65536 00:31:41.775 } 00:31:41.775 ] 00:31:41.775 }' 00:31:41.775 08:58:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:41.775 08:58:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.342 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:42.342 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:42.601 [2024-07-12 08:58:17.757942] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:42.601 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:31:42.601 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.601 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:42.860 08:58:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:43.119 [2024-07-12 08:58:18.245795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:43.119 /dev/nbd0 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:43.119 1+0 records in 00:31:43.119 1+0 records out 00:31:43.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445314 s, 9.2 MB/s 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:31:43.119 08:58:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:31:51.233 65536+0 records in 00:31:51.233 65536+0 records out 00:31:51.233 
33554432 bytes (34 MB, 32 MiB) copied, 7.35884 s, 4.6 MB/s 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.233 [2024-07-12 08:58:25.939235] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.233 08:58:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:51.233 [2024-07-12 08:58:26.146928] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:51.233 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.234 "name": "raid_bdev1", 
00:31:51.234 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:51.234 "strip_size_kb": 0, 00:31:51.234 "state": "online", 00:31:51.234 "raid_level": "raid1", 00:31:51.234 "superblock": false, 00:31:51.234 "num_base_bdevs": 4, 00:31:51.234 "num_base_bdevs_discovered": 3, 00:31:51.234 "num_base_bdevs_operational": 3, 00:31:51.234 "base_bdevs_list": [ 00:31:51.234 { 00:31:51.234 "name": null, 00:31:51.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.234 "is_configured": false, 00:31:51.234 "data_offset": 0, 00:31:51.234 "data_size": 65536 00:31:51.234 }, 00:31:51.234 { 00:31:51.234 "name": "BaseBdev2", 00:31:51.234 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:51.234 "is_configured": true, 00:31:51.234 "data_offset": 0, 00:31:51.234 "data_size": 65536 00:31:51.234 }, 00:31:51.234 { 00:31:51.234 "name": "BaseBdev3", 00:31:51.234 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:51.234 "is_configured": true, 00:31:51.234 "data_offset": 0, 00:31:51.234 "data_size": 65536 00:31:51.234 }, 00:31:51.234 { 00:31:51.234 "name": "BaseBdev4", 00:31:51.234 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:51.234 "is_configured": true, 00:31:51.234 "data_offset": 0, 00:31:51.234 "data_size": 65536 00:31:51.234 } 00:31:51.234 ] 00:31:51.234 }' 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.234 08:58:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.171 08:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:52.171 [2024-07-12 08:58:27.283162] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:52.171 [2024-07-12 08:58:27.295251] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bc50 00:31:52.171 [2024-07-12 08:58:27.297581] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:52.171 08:58:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.548 "name": "raid_bdev1", 00:31:53.548 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:53.548 "strip_size_kb": 0, 00:31:53.548 "state": "online", 00:31:53.548 "raid_level": "raid1", 00:31:53.548 "superblock": false, 00:31:53.548 "num_base_bdevs": 4, 00:31:53.548 "num_base_bdevs_discovered": 4, 00:31:53.548 "num_base_bdevs_operational": 4, 00:31:53.548 "process": { 00:31:53.548 "type": "rebuild", 00:31:53.548 "target": "spare", 00:31:53.548 "progress": { 
00:31:53.548 "blocks": 24576, 00:31:53.548 "percent": 37 00:31:53.548 } 00:31:53.548 }, 00:31:53.548 "base_bdevs_list": [ 00:31:53.548 { 00:31:53.548 "name": "spare", 00:31:53.548 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:53.548 "is_configured": true, 00:31:53.548 "data_offset": 0, 00:31:53.548 "data_size": 65536 00:31:53.548 }, 00:31:53.548 { 00:31:53.548 "name": "BaseBdev2", 00:31:53.548 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:53.548 "is_configured": true, 00:31:53.548 "data_offset": 0, 00:31:53.548 "data_size": 65536 00:31:53.548 }, 00:31:53.548 { 00:31:53.548 "name": "BaseBdev3", 00:31:53.548 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:53.548 "is_configured": true, 00:31:53.548 "data_offset": 0, 00:31:53.548 "data_size": 65536 00:31:53.548 }, 00:31:53.548 { 00:31:53.548 "name": "BaseBdev4", 00:31:53.548 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:53.548 "is_configured": true, 00:31:53.548 "data_offset": 0, 00:31:53.548 "data_size": 65536 00:31:53.548 } 00:31:53.548 ] 00:31:53.548 }' 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.548 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:53.806 [2024-07-12 08:58:28.895803] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:53.807 [2024-07-12 08:58:28.908370] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:53.807 [2024-07-12 08:58:28.908641] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:53.807 [2024-07-12 08:58:28.908771] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:53.807 [2024-07-12 08:58:28.908865] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:31:53.807 08:58:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.064 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.065 "name": "raid_bdev1", 00:31:54.065 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:54.065 "strip_size_kb": 0, 00:31:54.065 "state": "online", 00:31:54.065 "raid_level": "raid1", 00:31:54.065 "superblock": false, 00:31:54.065 "num_base_bdevs": 4, 00:31:54.065 "num_base_bdevs_discovered": 3, 00:31:54.065 "num_base_bdevs_operational": 3, 00:31:54.065 "base_bdevs_list": [ 00:31:54.065 { 00:31:54.065 "name": null, 00:31:54.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.065 "is_configured": false, 00:31:54.065 "data_offset": 0, 00:31:54.065 "data_size": 65536 00:31:54.065 }, 00:31:54.065 { 00:31:54.065 "name": "BaseBdev2", 00:31:54.065 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:54.065 "is_configured": true, 00:31:54.065 "data_offset": 0, 00:31:54.065 "data_size": 65536 00:31:54.065 }, 00:31:54.065 { 00:31:54.065 "name": "BaseBdev3", 00:31:54.065 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:54.065 "is_configured": true, 00:31:54.065 "data_offset": 0, 00:31:54.065 "data_size": 65536 00:31:54.065 }, 00:31:54.065 { 00:31:54.065 "name": "BaseBdev4", 00:31:54.065 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:54.065 "is_configured": true, 00:31:54.065 "data_offset": 0, 00:31:54.065 "data_size": 65536 00:31:54.065 } 00:31:54.065 ] 00:31:54.065 }' 00:31:54.065 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.065 08:58:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.998 08:58:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:55.256 "name": "raid_bdev1", 00:31:55.256 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:55.256 "strip_size_kb": 0, 00:31:55.256 "state": "online", 00:31:55.256 "raid_level": "raid1", 00:31:55.256 "superblock": false, 00:31:55.256 "num_base_bdevs": 4, 00:31:55.256 "num_base_bdevs_discovered": 3, 00:31:55.256 "num_base_bdevs_operational": 3, 00:31:55.256 "base_bdevs_list": [ 00:31:55.256 { 00:31:55.256 "name": null, 00:31:55.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.256 "is_configured": false, 00:31:55.256 "data_offset": 0, 00:31:55.256 "data_size": 65536 00:31:55.256 }, 00:31:55.256 { 00:31:55.256 "name": "BaseBdev2", 00:31:55.256 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:55.256 "is_configured": true, 00:31:55.256 "data_offset": 0, 00:31:55.256 "data_size": 65536 00:31:55.256 }, 00:31:55.256 { 00:31:55.256 "name": "BaseBdev3", 
00:31:55.256 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:55.256 "is_configured": true, 00:31:55.256 "data_offset": 0, 00:31:55.256 "data_size": 65536 00:31:55.256 }, 00:31:55.256 { 00:31:55.256 "name": "BaseBdev4", 00:31:55.256 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:55.256 "is_configured": true, 00:31:55.256 "data_offset": 0, 00:31:55.256 "data_size": 65536 00:31:55.256 } 00:31:55.256 ] 00:31:55.256 }' 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:55.256 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:55.514 [2024-07-12 08:58:30.570904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.514 [2024-07-12 08:58:30.582413] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bdf0 00:31:55.514 [2024-07-12 08:58:30.584702] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:55.514 08:58:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.449 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.708 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.708 "name": "raid_bdev1", 00:31:56.708 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:56.708 "strip_size_kb": 0, 00:31:56.708 "state": "online", 00:31:56.708 "raid_level": "raid1", 00:31:56.708 "superblock": false, 00:31:56.708 "num_base_bdevs": 4, 00:31:56.708 "num_base_bdevs_discovered": 4, 00:31:56.708 "num_base_bdevs_operational": 4, 00:31:56.708 "process": { 00:31:56.708 "type": "rebuild", 00:31:56.708 "target": "spare", 00:31:56.708 "progress": { 00:31:56.708 "blocks": 24576, 00:31:56.708 "percent": 37 00:31:56.708 } 00:31:56.708 }, 00:31:56.708 "base_bdevs_list": [ 00:31:56.708 { 00:31:56.708 "name": "spare", 00:31:56.708 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:56.708 "is_configured": true, 00:31:56.708 "data_offset": 0, 00:31:56.708 "data_size": 65536 00:31:56.708 }, 00:31:56.708 { 00:31:56.708 "name": "BaseBdev2", 00:31:56.708 "uuid": "daf878c7-ba35-562c-8adc-25480b8579c2", 00:31:56.708 "is_configured": true, 00:31:56.708 "data_offset": 0, 00:31:56.708 "data_size": 65536 00:31:56.708 }, 00:31:56.708 { 00:31:56.708 "name": 
"BaseBdev3", 00:31:56.708 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:56.708 "is_configured": true, 00:31:56.708 "data_offset": 0, 00:31:56.708 "data_size": 65536 00:31:56.708 }, 00:31:56.708 { 00:31:56.708 "name": "BaseBdev4", 00:31:56.708 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:56.708 "is_configured": true, 00:31:56.708 "data_offset": 0, 00:31:56.708 "data_size": 65536 00:31:56.708 } 00:31:56.708 ] 00:31:56.708 }' 00:31:56.708 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:56.967 08:58:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:57.226 [2024-07-12 08:58:32.235066] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:57.226 [2024-07-12 08:58:32.296195] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0bdf0 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.226 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.485 "name": "raid_bdev1", 00:31:57.485 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:57.485 "strip_size_kb": 0, 00:31:57.485 "state": "online", 00:31:57.485 "raid_level": "raid1", 00:31:57.485 "superblock": false, 00:31:57.485 "num_base_bdevs": 4, 00:31:57.485 "num_base_bdevs_discovered": 3, 00:31:57.485 "num_base_bdevs_operational": 3, 00:31:57.485 "process": { 00:31:57.485 "type": "rebuild", 00:31:57.485 "target": "spare", 00:31:57.485 "progress": { 00:31:57.485 "blocks": 38912, 00:31:57.485 "percent": 59 00:31:57.485 } 00:31:57.485 }, 00:31:57.485 "base_bdevs_list": [ 00:31:57.485 { 00:31:57.485 "name": "spare", 00:31:57.485 "uuid": 
"b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:57.485 "is_configured": true, 00:31:57.485 "data_offset": 0, 00:31:57.485 "data_size": 65536 00:31:57.485 }, 00:31:57.485 { 00:31:57.485 "name": null, 00:31:57.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.485 "is_configured": false, 00:31:57.485 "data_offset": 0, 00:31:57.485 "data_size": 65536 00:31:57.485 }, 00:31:57.485 { 00:31:57.485 "name": "BaseBdev3", 00:31:57.485 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:57.485 "is_configured": true, 00:31:57.485 "data_offset": 0, 00:31:57.485 "data_size": 65536 00:31:57.485 }, 00:31:57.485 { 00:31:57.485 "name": "BaseBdev4", 00:31:57.485 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:57.485 "is_configured": true, 00:31:57.485 "data_offset": 0, 00:31:57.485 "data_size": 65536 00:31:57.485 } 00:31:57.485 ] 00:31:57.485 }' 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1011 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.485 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.051 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.051 "name": "raid_bdev1", 00:31:58.051 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:58.051 "strip_size_kb": 0, 00:31:58.051 "state": "online", 00:31:58.051 "raid_level": "raid1", 00:31:58.051 "superblock": false, 00:31:58.051 "num_base_bdevs": 4, 00:31:58.051 "num_base_bdevs_discovered": 3, 00:31:58.051 "num_base_bdevs_operational": 3, 00:31:58.051 "process": { 00:31:58.051 "type": "rebuild", 00:31:58.051 "target": "spare", 00:31:58.051 "progress": { 00:31:58.051 "blocks": 47104, 00:31:58.051 "percent": 71 00:31:58.051 } 00:31:58.051 }, 00:31:58.051 "base_bdevs_list": [ 00:31:58.051 { 00:31:58.051 "name": "spare", 00:31:58.051 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:58.051 "is_configured": true, 00:31:58.051 "data_offset": 0, 00:31:58.051 "data_size": 65536 00:31:58.051 }, 00:31:58.051 { 00:31:58.051 "name": null, 00:31:58.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.051 "is_configured": false, 00:31:58.051 "data_offset": 0, 00:31:58.051 "data_size": 65536 00:31:58.051 }, 00:31:58.051 { 00:31:58.051 "name": "BaseBdev3", 00:31:58.051 "uuid": 
"9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:58.051 "is_configured": true, 00:31:58.051 "data_offset": 0, 00:31:58.051 "data_size": 65536 00:31:58.051 }, 00:31:58.051 { 00:31:58.051 "name": "BaseBdev4", 00:31:58.051 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:58.051 "is_configured": true, 00:31:58.051 "data_offset": 0, 00:31:58.051 "data_size": 65536 00:31:58.051 } 00:31:58.051 ] 00:31:58.051 }' 00:31:58.051 08:58:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:58.051 08:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:58.051 08:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:58.051 08:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:58.051 08:58:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:58.616 [2024-07-12 08:58:33.806665] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:58.616 [2024-07-12 08:58:33.806953] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:58.616 [2024-07-12 08:58:33.807176] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.182 "name": "raid_bdev1", 00:31:59.182 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:59.182 "strip_size_kb": 0, 00:31:59.182 "state": "online", 00:31:59.182 "raid_level": "raid1", 00:31:59.182 "superblock": false, 00:31:59.182 "num_base_bdevs": 4, 00:31:59.182 "num_base_bdevs_discovered": 3, 00:31:59.182 "num_base_bdevs_operational": 3, 00:31:59.182 "base_bdevs_list": [ 00:31:59.182 { 00:31:59.182 "name": "spare", 00:31:59.182 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:59.182 "is_configured": true, 00:31:59.182 "data_offset": 0, 00:31:59.182 "data_size": 65536 00:31:59.182 }, 00:31:59.182 { 00:31:59.182 "name": null, 00:31:59.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.182 "is_configured": false, 00:31:59.182 "data_offset": 0, 00:31:59.182 "data_size": 65536 00:31:59.182 }, 00:31:59.182 { 00:31:59.182 "name": "BaseBdev3", 00:31:59.182 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:59.182 "is_configured": true, 00:31:59.182 "data_offset": 0, 00:31:59.182 "data_size": 65536 00:31:59.182 }, 00:31:59.182 { 00:31:59.182 "name": "BaseBdev4", 00:31:59.182 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:59.182 
"is_configured": true, 00:31:59.182 "data_offset": 0, 00:31:59.182 "data_size": 65536 00:31:59.182 } 00:31:59.182 ] 00:31:59.182 }' 00:31:59.182 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.441 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.700 "name": "raid_bdev1", 00:31:59.700 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:59.700 "strip_size_kb": 0, 00:31:59.700 "state": "online", 00:31:59.700 "raid_level": "raid1", 00:31:59.700 "superblock": false, 00:31:59.700 "num_base_bdevs": 4, 00:31:59.700 "num_base_bdevs_discovered": 3, 00:31:59.700 "num_base_bdevs_operational": 3, 00:31:59.700 "base_bdevs_list": [ 00:31:59.700 { 00:31:59.700 "name": "spare", 00:31:59.700 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:59.700 "is_configured": true, 00:31:59.700 "data_offset": 0, 00:31:59.700 "data_size": 65536 00:31:59.700 }, 00:31:59.700 { 00:31:59.700 "name": null, 00:31:59.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.700 "is_configured": false, 00:31:59.700 "data_offset": 0, 00:31:59.700 "data_size": 65536 00:31:59.700 }, 00:31:59.700 { 00:31:59.700 "name": "BaseBdev3", 00:31:59.700 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:59.700 "is_configured": true, 00:31:59.700 "data_offset": 0, 00:31:59.700 "data_size": 65536 00:31:59.700 }, 00:31:59.700 { 00:31:59.700 "name": "BaseBdev4", 00:31:59.700 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:59.700 "is_configured": true, 00:31:59.700 "data_offset": 0, 00:31:59.700 "data_size": 65536 00:31:59.700 } 00:31:59.700 ] 00:31:59.700 }' 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.700 08:58:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.960 08:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.960 "name": "raid_bdev1", 00:31:59.960 "uuid": "7d64d0a1-33c5-4364-8e39-97b6e9d7c100", 00:31:59.960 "strip_size_kb": 0, 00:31:59.960 "state": "online", 00:31:59.960 "raid_level": "raid1", 00:31:59.960 "superblock": false, 00:31:59.960 "num_base_bdevs": 4, 00:31:59.960 "num_base_bdevs_discovered": 3, 00:31:59.960 "num_base_bdevs_operational": 3, 00:31:59.960 "base_bdevs_list": [ 00:31:59.960 { 00:31:59.960 "name": "spare", 00:31:59.960 "uuid": "b0750208-3b6c-5a6a-8bf4-bf4dffb62396", 00:31:59.960 "is_configured": true, 00:31:59.960 "data_offset": 0, 00:31:59.960 "data_size": 65536 00:31:59.960 }, 00:31:59.960 { 00:31:59.960 "name": null, 00:31:59.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.960 "is_configured": false, 00:31:59.960 "data_offset": 0, 00:31:59.960 "data_size": 65536 00:31:59.960 }, 00:31:59.960 { 00:31:59.960 "name": "BaseBdev3", 00:31:59.960 "uuid": "9e9fe3a6-ad55-5a34-bcd7-edbd3b18506f", 00:31:59.960 "is_configured": true, 00:31:59.960 "data_offset": 0, 00:31:59.960 "data_size": 65536 00:31:59.960 }, 00:31:59.960 { 00:31:59.960 "name": "BaseBdev4", 00:31:59.960 "uuid": "3a4583a7-5bd3-5335-9d51-473059a2dce4", 00:31:59.960 "is_configured": true, 00:31:59.960 "data_offset": 0, 00:31:59.960 "data_size": 65536 00:31:59.960 } 00:31:59.960 ] 00:31:59.960 }' 00:31:59.960 08:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.960 08:58:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.896 08:58:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:00.896 [2024-07-12 08:58:36.021456] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:00.896 [2024-07-12 08:58:36.021780] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:00.896 [2024-07-12 08:58:36.021968] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:00.896 [2024-07-12 08:58:36.022174] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:00.896 [2024-07-12 08:58:36.022276] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:32:00.896 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.896 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:01.155 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:01.414 /dev/nbd0 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:01.414 1+0 records in 00:32:01.414 1+0 records out 00:32:01.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415901 s, 9.8 MB/s 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:01.414 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:01.674 /dev/nbd1 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:01.674 1+0 records in 00:32:01.674 1+0 records out 00:32:01.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00159822 s, 2.6 MB/s 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:01.674 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:01.933 08:58:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:02.192 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:02.451 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:02.451 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 149317 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 149317 ']' 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 149317 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149317 00:32:02.452 killing process with pid 149317 00:32:02.452 Received shutdown signal, test time was about 60.000000 seconds 00:32:02.452 00:32:02.452 Latency(us) 00:32:02.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.452 =================================================================================================================== 00:32:02.452 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:02.452 08:58:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149317' 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 149317 00:32:02.452 08:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 149317 00:32:02.452 [2024-07-12 08:58:37.646332] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:03.019 [2024-07-12 08:58:38.032087] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:32:03.957 ************************************ 00:32:03.957 END TEST raid_rebuild_test 00:32:03.957 ************************************ 00:32:03.957 00:32:03.957 real 0m26.556s 00:32:03.957 user 0m37.151s 00:32:03.957 sys 0m4.428s 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.957 08:58:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:03.957 08:58:39 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:32:03.957 08:58:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:03.957 08:58:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.957 08:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:03.957 ************************************ 00:32:03.957 START TEST raid_rebuild_test_sb 00:32:03.957 ************************************ 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:03.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=149941 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 149941 /var/tmp/spdk-raid.sock 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 149941 ']' 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:03.957 08:58:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.216 [2024-07-12 08:58:39.215078] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:32:04.216 [2024-07-12 08:58:39.215622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149941 ] 00:32:04.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:04.216 Zero copy mechanism will not be used. 
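For reference, the same launch-and-poll pattern drives both tests in this log: bdevperf is started with a private RPC socket, waitforlisten blocks until that socket answers, and rebuild progress is then sampled with bdev_raid_get_bdevs piped through jq (the '.process.type // "none"' and '.process.target // "none"' expressions visible throughout the trace). A minimal standalone sketch of that loop follows; the retry budget, polling interval, and the use of rpc_get_methods as a liveness probe are assumptions rather than the exact autotest_common.sh code, while the bdevperf flags, RPC method names, and jq filters are copied from the trace itself.

    #!/usr/bin/env bash
    set -euo pipefail

    spdk_dir=/home/vagrant/spdk_repo/spdk      # repo path as used in the trace
    rpc_sock=/var/tmp/spdk-raid.sock           # private RPC socket for this test
    rpc() { "$spdk_dir/scripts/rpc.py" -s "$rpc_sock" "$@"; }

    # -z parks bdevperf until tests are started over RPC; -L bdev_raid enables
    # the debug log component whose *DEBUG* lines appear throughout this log.
    "$spdk_dir/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Wait until the app is alive and its RPC socket answers; fail fast if the
    # process dies during startup (rpc_get_methods as probe is an assumption).
    for ((i = 0; i < 200; i++)); do
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        rpc rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

    # ... create the base bdevs and the raid bdev here, as traced above ...

    # Poll rebuild progress the way verify_raid_bdev_process does: fetch the
    # bdev record and read .process.type/.process.target, defaulting to "none".
    timeout=$((SECONDS + 60))
    while ((SECONDS < timeout)); do
        info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        echo "rebuild target: $(jq -r '.process.target // "none"' <<<"$info")"
        sleep 1
    done

In the trace this loop ends when .process.type flips from "rebuild" to "none", at which point the script breaks out and hands off to verify_raid_bdev_state to check the steady-state record (state online, raid1, three operational base bdevs).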
00:32:04.216 [2024-07-12 08:58:39.384939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.475 [2024-07-12 08:58:39.585660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.733 [2024-07-12 08:58:39.764920] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:04.992 08:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.992 08:58:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:32:04.992 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:04.992 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:05.259 BaseBdev1_malloc 00:32:05.259 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:05.518 [2024-07-12 08:58:40.620609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:05.518 [2024-07-12 08:58:40.620985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.518 [2024-07-12 08:58:40.621141] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:05.518 [2024-07-12 08:58:40.621309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.518 [2024-07-12 08:58:40.623972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.518 [2024-07-12 08:58:40.624152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:05.518 BaseBdev1 00:32:05.518 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:05.518 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:05.776 BaseBdev2_malloc 00:32:05.776 08:58:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:06.044 [2024-07-12 08:58:41.165252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:06.044 [2024-07-12 08:58:41.165678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:06.044 [2024-07-12 08:58:41.165831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:06.044 [2024-07-12 08:58:41.165946] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:06.044 [2024-07-12 08:58:41.168511] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:06.044 [2024-07-12 08:58:41.168669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:06.044 BaseBdev2 00:32:06.044 08:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:06.044 08:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:06.307 BaseBdev3_malloc 00:32:06.307 08:58:41 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:06.565 [2024-07-12 08:58:41.633382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:06.565 [2024-07-12 08:58:41.633777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:06.565 [2024-07-12 08:58:41.633866] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:06.565 [2024-07-12 08:58:41.634131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:06.565 [2024-07-12 08:58:41.636572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:06.565 [2024-07-12 08:58:41.636760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:06.565 BaseBdev3 00:32:06.565 08:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:06.565 08:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:06.823 BaseBdev4_malloc 00:32:06.823 08:58:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:07.082 [2024-07-12 08:58:42.131255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:07.082 [2024-07-12 08:58:42.131648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:07.082 [2024-07-12 08:58:42.131732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:07.082 [2024-07-12 08:58:42.131858] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:07.082 [2024-07-12 08:58:42.134488] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:07.082 [2024-07-12 08:58:42.134695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:07.082 BaseBdev4 00:32:07.082 08:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:07.341 spare_malloc 00:32:07.341 08:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:07.600 spare_delay 00:32:07.600 08:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:07.859 [2024-07-12 08:58:42.863200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:07.859 [2024-07-12 08:58:42.863609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:07.859 [2024-07-12 08:58:42.863684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:07.859 [2024-07-12 08:58:42.863935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:07.859 [2024-07-12 08:58:42.866613] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:07.859 [2024-07-12 
08:58:42.866804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:07.859 spare 00:32:07.859 08:58:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:08.116 [2024-07-12 08:58:43.079344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:08.116 [2024-07-12 08:58:43.081601] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:08.116 [2024-07-12 08:58:43.081727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:08.116 [2024-07-12 08:58:43.081913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:08.116 [2024-07-12 08:58:43.082310] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:32:08.116 [2024-07-12 08:58:43.082490] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:08.116 [2024-07-12 08:58:43.082729] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:08.116 [2024-07-12 08:58:43.083249] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:32:08.116 [2024-07-12 08:58:43.083418] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:32:08.116 [2024-07-12 08:58:43.083671] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.116 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.117 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.374 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.374 "name": "raid_bdev1", 00:32:08.374 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:08.374 "strip_size_kb": 0, 00:32:08.374 "state": "online", 00:32:08.374 "raid_level": "raid1", 00:32:08.374 "superblock": true, 00:32:08.374 "num_base_bdevs": 4, 00:32:08.374 "num_base_bdevs_discovered": 4, 00:32:08.374 "num_base_bdevs_operational": 4, 00:32:08.374 "base_bdevs_list": [ 00:32:08.374 { 
00:32:08.374 "name": "BaseBdev1", 00:32:08.374 "uuid": "59f2ff9a-d25a-5e8d-a4c8-77f84ecb2780", 00:32:08.374 "is_configured": true, 00:32:08.374 "data_offset": 2048, 00:32:08.374 "data_size": 63488 00:32:08.374 }, 00:32:08.374 { 00:32:08.374 "name": "BaseBdev2", 00:32:08.374 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:08.374 "is_configured": true, 00:32:08.374 "data_offset": 2048, 00:32:08.374 "data_size": 63488 00:32:08.374 }, 00:32:08.374 { 00:32:08.374 "name": "BaseBdev3", 00:32:08.374 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:08.374 "is_configured": true, 00:32:08.374 "data_offset": 2048, 00:32:08.374 "data_size": 63488 00:32:08.374 }, 00:32:08.374 { 00:32:08.374 "name": "BaseBdev4", 00:32:08.374 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:08.374 "is_configured": true, 00:32:08.374 "data_offset": 2048, 00:32:08.374 "data_size": 63488 00:32:08.374 } 00:32:08.374 ] 00:32:08.374 }' 00:32:08.374 08:58:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.374 08:58:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.947 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:08.947 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:09.220 [2024-07-12 08:58:44.328181] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:09.220 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:32:09.220 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.220 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.489 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:09.747 [2024-07-12 08:58:44.816066] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:09.747 /dev/nbd0 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:09.747 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:09.748 1+0 records in 00:32:09.748 1+0 records out 00:32:09.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728629 s, 5.6 MB/s 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:32:09.748 08:58:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:32:17.862 63488+0 records in 00:32:17.862 63488+0 records out 00:32:17.862 32505856 bytes (33 MB, 31 MiB) copied, 7.23687 s, 4.5 MB/s 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:17.862 [2024-07-12 08:58:52.396103] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:17.862 [2024-07-12 08:58:52.623714] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.862 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.862 "name": "raid_bdev1", 00:32:17.862 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:17.862 "strip_size_kb": 0, 00:32:17.862 "state": "online", 00:32:17.862 "raid_level": "raid1", 00:32:17.862 "superblock": true, 00:32:17.862 "num_base_bdevs": 4, 00:32:17.862 "num_base_bdevs_discovered": 3, 00:32:17.862 "num_base_bdevs_operational": 3, 00:32:17.862 "base_bdevs_list": [ 00:32:17.863 { 00:32:17.863 "name": null, 00:32:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.863 "is_configured": false, 00:32:17.863 "data_offset": 2048, 00:32:17.863 "data_size": 63488 00:32:17.863 }, 00:32:17.863 { 00:32:17.863 "name": "BaseBdev2", 00:32:17.863 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:17.863 "is_configured": true, 00:32:17.863 "data_offset": 2048, 00:32:17.863 
"data_size": 63488 00:32:17.863 }, 00:32:17.863 { 00:32:17.863 "name": "BaseBdev3", 00:32:17.863 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:17.863 "is_configured": true, 00:32:17.863 "data_offset": 2048, 00:32:17.863 "data_size": 63488 00:32:17.863 }, 00:32:17.863 { 00:32:17.863 "name": "BaseBdev4", 00:32:17.863 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:17.863 "is_configured": true, 00:32:17.863 "data_offset": 2048, 00:32:17.863 "data_size": 63488 00:32:17.863 } 00:32:17.863 ] 00:32:17.863 }' 00:32:17.863 08:58:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.863 08:58:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.427 08:58:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:18.685 [2024-07-12 08:58:53.808002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:18.685 [2024-07-12 08:58:53.820132] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca53e0 00:32:18.685 [2024-07-12 08:58:53.822417] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:18.686 08:58:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.063 08:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.063 "name": "raid_bdev1", 00:32:20.063 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:20.063 "strip_size_kb": 0, 00:32:20.063 "state": "online", 00:32:20.063 "raid_level": "raid1", 00:32:20.063 "superblock": true, 00:32:20.063 "num_base_bdevs": 4, 00:32:20.063 "num_base_bdevs_discovered": 4, 00:32:20.063 "num_base_bdevs_operational": 4, 00:32:20.063 "process": { 00:32:20.063 "type": "rebuild", 00:32:20.063 "target": "spare", 00:32:20.063 "progress": { 00:32:20.063 "blocks": 24576, 00:32:20.063 "percent": 38 00:32:20.063 } 00:32:20.063 }, 00:32:20.063 "base_bdevs_list": [ 00:32:20.063 { 00:32:20.063 "name": "spare", 00:32:20.063 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:20.063 "is_configured": true, 00:32:20.063 "data_offset": 2048, 00:32:20.063 "data_size": 63488 00:32:20.063 }, 00:32:20.063 { 00:32:20.063 "name": "BaseBdev2", 00:32:20.063 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:20.063 "is_configured": true, 00:32:20.063 "data_offset": 2048, 00:32:20.063 "data_size": 63488 00:32:20.063 }, 00:32:20.063 { 00:32:20.063 "name": "BaseBdev3", 00:32:20.063 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:20.063 "is_configured": true, 
00:32:20.063 "data_offset": 2048, 00:32:20.063 "data_size": 63488 00:32:20.063 }, 00:32:20.063 { 00:32:20.063 "name": "BaseBdev4", 00:32:20.063 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:20.063 "is_configured": true, 00:32:20.063 "data_offset": 2048, 00:32:20.063 "data_size": 63488 00:32:20.063 } 00:32:20.063 ] 00:32:20.063 }' 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:20.063 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:20.322 [2024-07-12 08:58:55.436927] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:20.581 [2024-07-12 08:58:55.534217] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:20.581 [2024-07-12 08:58:55.534611] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:20.581 [2024-07-12 08:58:55.534742] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:20.581 [2024-07-12 08:58:55.534782] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.581 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.840 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.840 "name": "raid_bdev1", 00:32:20.840 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:20.840 "strip_size_kb": 0, 00:32:20.840 "state": "online", 00:32:20.840 "raid_level": "raid1", 00:32:20.840 "superblock": true, 00:32:20.840 "num_base_bdevs": 4, 00:32:20.840 "num_base_bdevs_discovered": 3, 00:32:20.840 "num_base_bdevs_operational": 3, 00:32:20.840 "base_bdevs_list": [ 00:32:20.840 { 00:32:20.840 "name": null, 
00:32:20.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.840 "is_configured": false, 00:32:20.840 "data_offset": 2048, 00:32:20.840 "data_size": 63488 00:32:20.840 }, 00:32:20.840 { 00:32:20.840 "name": "BaseBdev2", 00:32:20.840 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:20.840 "is_configured": true, 00:32:20.840 "data_offset": 2048, 00:32:20.840 "data_size": 63488 00:32:20.840 }, 00:32:20.840 { 00:32:20.840 "name": "BaseBdev3", 00:32:20.840 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:20.840 "is_configured": true, 00:32:20.840 "data_offset": 2048, 00:32:20.840 "data_size": 63488 00:32:20.840 }, 00:32:20.840 { 00:32:20.840 "name": "BaseBdev4", 00:32:20.840 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:20.840 "is_configured": true, 00:32:20.840 "data_offset": 2048, 00:32:20.840 "data_size": 63488 00:32:20.840 } 00:32:20.840 ] 00:32:20.840 }' 00:32:20.840 08:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.840 08:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.407 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.666 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:21.666 "name": "raid_bdev1", 00:32:21.666 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:21.666 "strip_size_kb": 0, 00:32:21.666 "state": "online", 00:32:21.666 "raid_level": "raid1", 00:32:21.666 "superblock": true, 00:32:21.666 "num_base_bdevs": 4, 00:32:21.666 "num_base_bdevs_discovered": 3, 00:32:21.666 "num_base_bdevs_operational": 3, 00:32:21.666 "base_bdevs_list": [ 00:32:21.666 { 00:32:21.666 "name": null, 00:32:21.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.666 "is_configured": false, 00:32:21.666 "data_offset": 2048, 00:32:21.666 "data_size": 63488 00:32:21.666 }, 00:32:21.666 { 00:32:21.666 "name": "BaseBdev2", 00:32:21.666 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:21.666 "is_configured": true, 00:32:21.666 "data_offset": 2048, 00:32:21.666 "data_size": 63488 00:32:21.666 }, 00:32:21.666 { 00:32:21.666 "name": "BaseBdev3", 00:32:21.666 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:21.666 "is_configured": true, 00:32:21.666 "data_offset": 2048, 00:32:21.666 "data_size": 63488 00:32:21.666 }, 00:32:21.666 { 00:32:21.666 "name": "BaseBdev4", 00:32:21.666 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:21.666 "is_configured": true, 00:32:21.666 "data_offset": 2048, 00:32:21.666 "data_size": 63488 00:32:21.666 } 00:32:21.666 ] 00:32:21.666 }' 00:32:21.666 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:21.666 08:58:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:21.666 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:21.925 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:21.925 08:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:21.925 [2024-07-12 08:58:57.099911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:21.925 [2024-07-12 08:58:57.112478] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5580 00:32:21.925 [2024-07-12 08:58:57.114793] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:22.184 08:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.121 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.380 "name": "raid_bdev1", 00:32:23.380 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:23.380 "strip_size_kb": 0, 00:32:23.380 "state": "online", 00:32:23.380 "raid_level": "raid1", 00:32:23.380 "superblock": true, 00:32:23.380 "num_base_bdevs": 4, 00:32:23.380 "num_base_bdevs_discovered": 4, 00:32:23.380 "num_base_bdevs_operational": 4, 00:32:23.380 "process": { 00:32:23.380 "type": "rebuild", 00:32:23.380 "target": "spare", 00:32:23.380 "progress": { 00:32:23.380 "blocks": 24576, 00:32:23.380 "percent": 38 00:32:23.380 } 00:32:23.380 }, 00:32:23.380 "base_bdevs_list": [ 00:32:23.380 { 00:32:23.380 "name": "spare", 00:32:23.380 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:23.380 "is_configured": true, 00:32:23.380 "data_offset": 2048, 00:32:23.380 "data_size": 63488 00:32:23.380 }, 00:32:23.380 { 00:32:23.380 "name": "BaseBdev2", 00:32:23.380 "uuid": "cd5f544e-8a50-5cc0-827c-13db017edc04", 00:32:23.380 "is_configured": true, 00:32:23.380 "data_offset": 2048, 00:32:23.380 "data_size": 63488 00:32:23.380 }, 00:32:23.380 { 00:32:23.380 "name": "BaseBdev3", 00:32:23.380 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:23.380 "is_configured": true, 00:32:23.380 "data_offset": 2048, 00:32:23.380 "data_size": 63488 00:32:23.380 }, 00:32:23.380 { 00:32:23.380 "name": "BaseBdev4", 00:32:23.380 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:23.380 "is_configured": true, 00:32:23.380 "data_offset": 2048, 00:32:23.380 "data_size": 63488 00:32:23.380 } 00:32:23.380 ] 00:32:23.380 }' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // 
"none"' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:23.380 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:32:23.380 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:23.639 [2024-07-12 08:58:58.805101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:23.898 [2024-07-12 08:58:58.926375] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5580 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.898 08:58:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.157 "name": "raid_bdev1", 00:32:24.157 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:24.157 "strip_size_kb": 0, 00:32:24.157 "state": "online", 00:32:24.157 "raid_level": "raid1", 00:32:24.157 "superblock": true, 00:32:24.157 "num_base_bdevs": 4, 00:32:24.157 "num_base_bdevs_discovered": 3, 00:32:24.157 "num_base_bdevs_operational": 3, 00:32:24.157 "process": { 00:32:24.157 "type": "rebuild", 00:32:24.157 "target": "spare", 00:32:24.157 "progress": { 00:32:24.157 "blocks": 38912, 00:32:24.157 "percent": 61 00:32:24.157 } 00:32:24.157 }, 00:32:24.157 "base_bdevs_list": [ 00:32:24.157 { 00:32:24.157 "name": "spare", 00:32:24.157 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:24.157 "is_configured": true, 00:32:24.157 "data_offset": 2048, 00:32:24.157 "data_size": 63488 00:32:24.157 }, 00:32:24.157 { 00:32:24.157 "name": null, 00:32:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.157 "is_configured": 
false, 00:32:24.157 "data_offset": 2048, 00:32:24.157 "data_size": 63488 00:32:24.157 }, 00:32:24.157 { 00:32:24.157 "name": "BaseBdev3", 00:32:24.157 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:24.157 "is_configured": true, 00:32:24.157 "data_offset": 2048, 00:32:24.157 "data_size": 63488 00:32:24.157 }, 00:32:24.157 { 00:32:24.157 "name": "BaseBdev4", 00:32:24.157 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:24.157 "is_configured": true, 00:32:24.157 "data_offset": 2048, 00:32:24.157 "data_size": 63488 00:32:24.157 } 00:32:24.157 ] 00:32:24.157 }' 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1038 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:24.157 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:24.158 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.158 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.417 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.417 "name": "raid_bdev1", 00:32:24.417 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:24.417 "strip_size_kb": 0, 00:32:24.417 "state": "online", 00:32:24.417 "raid_level": "raid1", 00:32:24.417 "superblock": true, 00:32:24.417 "num_base_bdevs": 4, 00:32:24.417 "num_base_bdevs_discovered": 3, 00:32:24.417 "num_base_bdevs_operational": 3, 00:32:24.417 "process": { 00:32:24.417 "type": "rebuild", 00:32:24.417 "target": "spare", 00:32:24.417 "progress": { 00:32:24.417 "blocks": 47104, 00:32:24.417 "percent": 74 00:32:24.417 } 00:32:24.417 }, 00:32:24.417 "base_bdevs_list": [ 00:32:24.417 { 00:32:24.417 "name": "spare", 00:32:24.417 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:24.417 "is_configured": true, 00:32:24.417 "data_offset": 2048, 00:32:24.417 "data_size": 63488 00:32:24.417 }, 00:32:24.417 { 00:32:24.417 "name": null, 00:32:24.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.417 "is_configured": false, 00:32:24.417 "data_offset": 2048, 00:32:24.417 "data_size": 63488 00:32:24.417 }, 00:32:24.417 { 00:32:24.417 "name": "BaseBdev3", 00:32:24.417 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:24.417 "is_configured": true, 00:32:24.417 "data_offset": 2048, 00:32:24.417 "data_size": 63488 00:32:24.417 }, 00:32:24.417 { 00:32:24.417 "name": "BaseBdev4", 00:32:24.417 "uuid": 
"cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:24.417 "is_configured": true, 00:32:24.417 "data_offset": 2048, 00:32:24.417 "data_size": 63488 00:32:24.417 } 00:32:24.417 ] 00:32:24.417 }' 00:32:24.417 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.417 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.417 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.676 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.676 08:58:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:25.243 [2024-07-12 08:59:00.336099] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:25.243 [2024-07-12 08:59:00.336523] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:25.243 [2024-07-12 08:59:00.336802] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.502 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.762 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:25.762 "name": "raid_bdev1", 00:32:25.762 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:25.762 "strip_size_kb": 0, 00:32:25.762 "state": "online", 00:32:25.762 "raid_level": "raid1", 00:32:25.762 "superblock": true, 00:32:25.762 "num_base_bdevs": 4, 00:32:25.762 "num_base_bdevs_discovered": 3, 00:32:25.762 "num_base_bdevs_operational": 3, 00:32:25.762 "base_bdevs_list": [ 00:32:25.762 { 00:32:25.762 "name": "spare", 00:32:25.762 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:25.762 "is_configured": true, 00:32:25.762 "data_offset": 2048, 00:32:25.762 "data_size": 63488 00:32:25.762 }, 00:32:25.762 { 00:32:25.762 "name": null, 00:32:25.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.762 "is_configured": false, 00:32:25.762 "data_offset": 2048, 00:32:25.762 "data_size": 63488 00:32:25.762 }, 00:32:25.762 { 00:32:25.762 "name": "BaseBdev3", 00:32:25.762 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:25.762 "is_configured": true, 00:32:25.762 "data_offset": 2048, 00:32:25.762 "data_size": 63488 00:32:25.762 }, 00:32:25.762 { 00:32:25.762 "name": "BaseBdev4", 00:32:25.762 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:25.762 "is_configured": true, 00:32:25.762 "data_offset": 2048, 00:32:25.762 "data_size": 63488 00:32:25.762 } 00:32:25.762 ] 00:32:25.762 }' 00:32:25.762 08:59:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:26.021 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:26.021 08:59:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.021 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:26.280 "name": "raid_bdev1", 00:32:26.280 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:26.280 "strip_size_kb": 0, 00:32:26.280 "state": "online", 00:32:26.280 "raid_level": "raid1", 00:32:26.280 "superblock": true, 00:32:26.280 "num_base_bdevs": 4, 00:32:26.280 "num_base_bdevs_discovered": 3, 00:32:26.280 "num_base_bdevs_operational": 3, 00:32:26.280 "base_bdevs_list": [ 00:32:26.280 { 00:32:26.280 "name": "spare", 00:32:26.280 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:26.280 "is_configured": true, 00:32:26.280 "data_offset": 2048, 00:32:26.280 "data_size": 63488 00:32:26.280 }, 00:32:26.280 { 00:32:26.280 "name": null, 00:32:26.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.280 "is_configured": false, 00:32:26.280 "data_offset": 2048, 00:32:26.280 "data_size": 63488 00:32:26.280 }, 00:32:26.280 { 00:32:26.280 "name": "BaseBdev3", 00:32:26.280 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:26.280 "is_configured": true, 00:32:26.280 "data_offset": 2048, 00:32:26.280 "data_size": 63488 00:32:26.280 }, 00:32:26.280 { 00:32:26.280 "name": "BaseBdev4", 00:32:26.280 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:26.280 "is_configured": true, 00:32:26.280 "data_offset": 2048, 00:32:26.280 "data_size": 63488 00:32:26.280 } 00:32:26.280 ] 00:32:26.280 }' 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:26.280 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.281 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.540 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.540 "name": "raid_bdev1", 00:32:26.540 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:26.540 "strip_size_kb": 0, 00:32:26.540 "state": "online", 00:32:26.540 "raid_level": "raid1", 00:32:26.540 "superblock": true, 00:32:26.540 "num_base_bdevs": 4, 00:32:26.540 "num_base_bdevs_discovered": 3, 00:32:26.540 "num_base_bdevs_operational": 3, 00:32:26.540 "base_bdevs_list": [ 00:32:26.540 { 00:32:26.540 "name": "spare", 00:32:26.540 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:26.540 "is_configured": true, 00:32:26.540 "data_offset": 2048, 00:32:26.540 "data_size": 63488 00:32:26.540 }, 00:32:26.540 { 00:32:26.540 "name": null, 00:32:26.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.540 "is_configured": false, 00:32:26.540 "data_offset": 2048, 00:32:26.540 "data_size": 63488 00:32:26.540 }, 00:32:26.540 { 00:32:26.540 "name": "BaseBdev3", 00:32:26.540 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:26.540 "is_configured": true, 00:32:26.540 "data_offset": 2048, 00:32:26.540 "data_size": 63488 00:32:26.540 }, 00:32:26.540 { 00:32:26.540 "name": "BaseBdev4", 00:32:26.540 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:26.540 "is_configured": true, 00:32:26.540 "data_offset": 2048, 00:32:26.540 "data_size": 63488 00:32:26.540 } 00:32:26.540 ] 00:32:26.540 }' 00:32:26.540 08:59:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.540 08:59:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:27.499 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:27.499 [2024-07-12 08:59:02.682374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:27.499 [2024-07-12 08:59:02.682727] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.499 [2024-07-12 08:59:02.682919] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.499 [2024-07-12 08:59:02.683119] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.499 [2024-07-12 08:59:02.683225] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name 
raid_bdev1, state offline 00:32:27.758 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.758 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:28.016 08:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:28.274 /dev/nbd0 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:28.274 1+0 records in 00:32:28.274 1+0 records out 00:32:28.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524556 s, 7.8 MB/s 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:28.274 08:59:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:28.274 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:28.532 /dev/nbd1 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:28.532 1+0 records in 00:32:28.532 1+0 records out 00:32:28.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128799 s, 3.2 MB/s 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:28.532 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:28.533 08:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:28.533 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:28.533 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:28.533 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:28.791 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:28.791 08:59:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:29.049 08:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:29.308 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:29.567 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:29.839 [2024-07-12 08:59:04.793982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:29.839 [2024-07-12 08:59:04.794406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.839 [2024-07-12 08:59:04.794583] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:32:29.839 [2024-07-12 08:59:04.794752] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.839 [2024-07-12 08:59:04.797275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.839 [2024-07-12 08:59:04.797472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:29.839 [2024-07-12 08:59:04.797704] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:29.839 [2024-07-12 08:59:04.797867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:29.839 [2024-07-12 08:59:04.798158] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:29.839 [2024-07-12 08:59:04.798471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:29.839 spare 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.839 08:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.839 [2024-07-12 08:59:04.898703] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:32:29.839 [2024-07-12 08:59:04.899005] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:29.839 [2024-07-12 08:59:04.899252] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5a40 00:32:29.840 [2024-07-12 08:59:04.899796] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:32:29.840 [2024-07-12 08:59:04.899954] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:32:29.840 [2024-07-12 08:59:04.900247] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.103 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.103 "name": "raid_bdev1", 00:32:30.103 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:30.103 "strip_size_kb": 0, 00:32:30.103 "state": "online", 00:32:30.103 "raid_level": "raid1", 00:32:30.103 "superblock": true, 00:32:30.103 "num_base_bdevs": 4, 00:32:30.103 "num_base_bdevs_discovered": 3, 00:32:30.103 "num_base_bdevs_operational": 3, 00:32:30.103 "base_bdevs_list": [ 00:32:30.103 { 00:32:30.103 "name": "spare", 00:32:30.103 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:30.103 "is_configured": true, 00:32:30.103 "data_offset": 2048, 00:32:30.103 "data_size": 63488 00:32:30.103 }, 00:32:30.103 { 00:32:30.103 "name": null, 00:32:30.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.103 "is_configured": false, 00:32:30.103 "data_offset": 2048, 00:32:30.103 "data_size": 63488 00:32:30.103 }, 00:32:30.103 { 00:32:30.103 "name": "BaseBdev3", 00:32:30.103 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:30.103 "is_configured": true, 00:32:30.103 "data_offset": 2048, 00:32:30.103 "data_size": 
63488 00:32:30.103 }, 00:32:30.103 { 00:32:30.104 "name": "BaseBdev4", 00:32:30.104 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:30.104 "is_configured": true, 00:32:30.104 "data_offset": 2048, 00:32:30.104 "data_size": 63488 00:32:30.104 } 00:32:30.104 ] 00:32:30.104 }' 00:32:30.104 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:30.104 08:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.669 08:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.940 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.940 "name": "raid_bdev1", 00:32:30.940 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:30.940 "strip_size_kb": 0, 00:32:30.940 "state": "online", 00:32:30.940 "raid_level": "raid1", 00:32:30.940 "superblock": true, 00:32:30.940 "num_base_bdevs": 4, 00:32:30.940 "num_base_bdevs_discovered": 3, 00:32:30.940 "num_base_bdevs_operational": 3, 00:32:30.940 "base_bdevs_list": [ 00:32:30.940 { 00:32:30.940 "name": "spare", 00:32:30.940 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:30.940 "is_configured": true, 00:32:30.940 "data_offset": 2048, 00:32:30.940 "data_size": 63488 00:32:30.940 }, 00:32:30.940 { 00:32:30.940 "name": null, 00:32:30.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.940 "is_configured": false, 00:32:30.940 "data_offset": 2048, 00:32:30.940 "data_size": 63488 00:32:30.940 }, 00:32:30.940 { 00:32:30.940 "name": "BaseBdev3", 00:32:30.940 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:30.940 "is_configured": true, 00:32:30.940 "data_offset": 2048, 00:32:30.940 "data_size": 63488 00:32:30.940 }, 00:32:30.940 { 00:32:30.940 "name": "BaseBdev4", 00:32:30.940 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:30.940 "is_configured": true, 00:32:30.940 "data_offset": 2048, 00:32:30.940 "data_size": 63488 00:32:30.940 } 00:32:30.940 ] 00:32:30.940 }' 00:32:30.940 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:30.940 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:30.940 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:31.212 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:31.212 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.212 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:31.212 08:59:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:31.213 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:31.472 [2024-07-12 08:59:06.570786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.472 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.731 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.731 "name": "raid_bdev1", 00:32:31.731 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:31.731 "strip_size_kb": 0, 00:32:31.731 "state": "online", 00:32:31.731 "raid_level": "raid1", 00:32:31.731 "superblock": true, 00:32:31.731 "num_base_bdevs": 4, 00:32:31.731 "num_base_bdevs_discovered": 2, 00:32:31.731 "num_base_bdevs_operational": 2, 00:32:31.731 "base_bdevs_list": [ 00:32:31.731 { 00:32:31.731 "name": null, 00:32:31.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.731 "is_configured": false, 00:32:31.731 "data_offset": 2048, 00:32:31.731 "data_size": 63488 00:32:31.731 }, 00:32:31.731 { 00:32:31.731 "name": null, 00:32:31.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.731 "is_configured": false, 00:32:31.731 "data_offset": 2048, 00:32:31.731 "data_size": 63488 00:32:31.731 }, 00:32:31.731 { 00:32:31.731 "name": "BaseBdev3", 00:32:31.731 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:31.731 "is_configured": true, 00:32:31.731 "data_offset": 2048, 00:32:31.731 "data_size": 63488 00:32:31.731 }, 00:32:31.731 { 00:32:31.731 "name": "BaseBdev4", 00:32:31.731 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:31.731 "is_configured": true, 00:32:31.731 "data_offset": 2048, 00:32:31.731 "data_size": 63488 00:32:31.731 } 00:32:31.731 ] 00:32:31.731 }' 00:32:31.731 08:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.731 08:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.666 08:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
spare 00:32:32.666 [2024-07-12 08:59:07.771089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:32.666 [2024-07-12 08:59:07.771609] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:32:32.666 [2024-07-12 08:59:07.771735] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:32.666 [2024-07-12 08:59:07.771838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:32.666 [2024-07-12 08:59:07.782959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5be0 00:32:32.666 [2024-07-12 08:59:07.785103] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:32.666 08:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.044 08:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.044 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.044 "name": "raid_bdev1", 00:32:34.044 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:34.044 "strip_size_kb": 0, 00:32:34.044 "state": "online", 00:32:34.044 "raid_level": "raid1", 00:32:34.044 "superblock": true, 00:32:34.044 "num_base_bdevs": 4, 00:32:34.044 "num_base_bdevs_discovered": 3, 00:32:34.044 "num_base_bdevs_operational": 3, 00:32:34.044 "process": { 00:32:34.044 "type": "rebuild", 00:32:34.044 "target": "spare", 00:32:34.044 "progress": { 00:32:34.044 "blocks": 24576, 00:32:34.044 "percent": 38 00:32:34.044 } 00:32:34.044 }, 00:32:34.044 "base_bdevs_list": [ 00:32:34.044 { 00:32:34.044 "name": "spare", 00:32:34.044 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:34.044 "is_configured": true, 00:32:34.044 "data_offset": 2048, 00:32:34.044 "data_size": 63488 00:32:34.044 }, 00:32:34.044 { 00:32:34.044 "name": null, 00:32:34.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.044 "is_configured": false, 00:32:34.044 "data_offset": 2048, 00:32:34.044 "data_size": 63488 00:32:34.044 }, 00:32:34.044 { 00:32:34.044 "name": "BaseBdev3", 00:32:34.044 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:34.044 "is_configured": true, 00:32:34.044 "data_offset": 2048, 00:32:34.044 "data_size": 63488 00:32:34.044 }, 00:32:34.044 { 00:32:34.044 "name": "BaseBdev4", 00:32:34.044 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:34.044 "is_configured": true, 00:32:34.044 "data_offset": 2048, 00:32:34.044 "data_size": 63488 00:32:34.044 } 00:32:34.044 ] 00:32:34.044 }' 00:32:34.044 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:34.044 
08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:34.044 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:34.044 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:34.044 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:34.303 [2024-07-12 08:59:09.403373] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:34.303 [2024-07-12 08:59:09.496520] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:34.303 [2024-07-12 08:59:09.496948] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.303 [2024-07-12 08:59:09.497113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:34.303 [2024-07-12 08:59:09.497155] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.562 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.821 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.821 "name": "raid_bdev1", 00:32:34.821 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:34.821 "strip_size_kb": 0, 00:32:34.821 "state": "online", 00:32:34.821 "raid_level": "raid1", 00:32:34.821 "superblock": true, 00:32:34.821 "num_base_bdevs": 4, 00:32:34.821 "num_base_bdevs_discovered": 2, 00:32:34.821 "num_base_bdevs_operational": 2, 00:32:34.822 "base_bdevs_list": [ 00:32:34.822 { 00:32:34.822 "name": null, 00:32:34.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.822 "is_configured": false, 00:32:34.822 "data_offset": 2048, 00:32:34.822 "data_size": 63488 00:32:34.822 }, 00:32:34.822 { 00:32:34.822 "name": null, 00:32:34.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.822 "is_configured": false, 00:32:34.822 "data_offset": 2048, 00:32:34.822 "data_size": 63488 00:32:34.822 }, 00:32:34.822 { 00:32:34.822 "name": "BaseBdev3", 00:32:34.822 "uuid": 
"f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:34.822 "is_configured": true, 00:32:34.822 "data_offset": 2048, 00:32:34.822 "data_size": 63488 00:32:34.822 }, 00:32:34.822 { 00:32:34.822 "name": "BaseBdev4", 00:32:34.822 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:34.822 "is_configured": true, 00:32:34.822 "data_offset": 2048, 00:32:34.822 "data_size": 63488 00:32:34.822 } 00:32:34.822 ] 00:32:34.822 }' 00:32:34.822 08:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.822 08:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.390 08:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:35.648 [2024-07-12 08:59:10.702406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:35.648 [2024-07-12 08:59:10.702829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.648 [2024-07-12 08:59:10.703000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:32:35.648 [2024-07-12 08:59:10.703143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.648 [2024-07-12 08:59:10.703845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.648 [2024-07-12 08:59:10.704003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:35.648 [2024-07-12 08:59:10.704301] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:35.648 [2024-07-12 08:59:10.704420] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:32:35.648 [2024-07-12 08:59:10.704516] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:35.648 [2024-07-12 08:59:10.704594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:35.648 [2024-07-12 08:59:10.716196] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:32:35.648 spare 00:32:35.648 [2024-07-12 08:59:10.718506] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:35.649 08:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.585 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.844 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.844 "name": "raid_bdev1", 00:32:36.844 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:36.844 "strip_size_kb": 0, 00:32:36.844 "state": "online", 00:32:36.844 "raid_level": "raid1", 00:32:36.844 "superblock": true, 00:32:36.844 "num_base_bdevs": 4, 00:32:36.844 "num_base_bdevs_discovered": 3, 00:32:36.844 "num_base_bdevs_operational": 3, 00:32:36.844 "process": { 00:32:36.844 "type": "rebuild", 00:32:36.844 "target": "spare", 00:32:36.844 "progress": { 00:32:36.844 "blocks": 24576, 00:32:36.844 "percent": 38 00:32:36.844 } 00:32:36.844 }, 00:32:36.844 "base_bdevs_list": [ 00:32:36.844 { 00:32:36.844 "name": "spare", 00:32:36.844 "uuid": "852bc7be-c919-56e0-ab22-53d65d558e37", 00:32:36.844 "is_configured": true, 00:32:36.844 "data_offset": 2048, 00:32:36.844 "data_size": 63488 00:32:36.844 }, 00:32:36.844 { 00:32:36.844 "name": null, 00:32:36.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.844 "is_configured": false, 00:32:36.844 "data_offset": 2048, 00:32:36.844 "data_size": 63488 00:32:36.844 }, 00:32:36.844 { 00:32:36.844 "name": "BaseBdev3", 00:32:36.844 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:36.844 "is_configured": true, 00:32:36.844 "data_offset": 2048, 00:32:36.844 "data_size": 63488 00:32:36.844 }, 00:32:36.844 { 00:32:36.844 "name": "BaseBdev4", 00:32:36.844 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:36.844 "is_configured": true, 00:32:36.844 "data_offset": 2048, 00:32:36.844 "data_size": 63488 00:32:36.844 } 00:32:36.844 ] 00:32:36.844 }' 00:32:36.844 08:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:37.102 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:37.102 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:37.102 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:37.102 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:37.361 [2024-07-12 08:59:12.360736] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:37.361 [2024-07-12 08:59:12.429772] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:37.361 [2024-07-12 08:59:12.430040] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.361 [2024-07-12 08:59:12.430096] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:37.361 [2024-07-12 08:59:12.430253] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.361 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.620 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.620 "name": "raid_bdev1", 00:32:37.620 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:37.620 "strip_size_kb": 0, 00:32:37.620 "state": "online", 00:32:37.620 "raid_level": "raid1", 00:32:37.620 "superblock": true, 00:32:37.620 "num_base_bdevs": 4, 00:32:37.620 "num_base_bdevs_discovered": 2, 00:32:37.620 "num_base_bdevs_operational": 2, 00:32:37.620 "base_bdevs_list": [ 00:32:37.620 { 00:32:37.620 "name": null, 00:32:37.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.620 "is_configured": false, 00:32:37.620 "data_offset": 2048, 00:32:37.620 "data_size": 63488 00:32:37.620 }, 00:32:37.620 { 00:32:37.620 "name": null, 00:32:37.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.620 "is_configured": false, 00:32:37.620 "data_offset": 2048, 00:32:37.620 "data_size": 63488 00:32:37.620 }, 00:32:37.620 { 00:32:37.620 "name": "BaseBdev3", 00:32:37.620 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:37.620 "is_configured": true, 00:32:37.620 "data_offset": 2048, 00:32:37.620 "data_size": 63488 00:32:37.620 }, 00:32:37.620 { 00:32:37.620 "name": "BaseBdev4", 00:32:37.620 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:37.620 "is_configured": true, 00:32:37.620 "data_offset": 2048, 00:32:37.620 "data_size": 63488 00:32:37.620 } 00:32:37.620 ] 00:32:37.620 }' 
00:32:37.620 08:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.620 08:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:38.554 "name": "raid_bdev1", 00:32:38.554 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:38.554 "strip_size_kb": 0, 00:32:38.554 "state": "online", 00:32:38.554 "raid_level": "raid1", 00:32:38.554 "superblock": true, 00:32:38.554 "num_base_bdevs": 4, 00:32:38.554 "num_base_bdevs_discovered": 2, 00:32:38.554 "num_base_bdevs_operational": 2, 00:32:38.554 "base_bdevs_list": [ 00:32:38.554 { 00:32:38.554 "name": null, 00:32:38.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.554 "is_configured": false, 00:32:38.554 "data_offset": 2048, 00:32:38.554 "data_size": 63488 00:32:38.554 }, 00:32:38.554 { 00:32:38.554 "name": null, 00:32:38.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.554 "is_configured": false, 00:32:38.554 "data_offset": 2048, 00:32:38.554 "data_size": 63488 00:32:38.554 }, 00:32:38.554 { 00:32:38.554 "name": "BaseBdev3", 00:32:38.554 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:38.554 "is_configured": true, 00:32:38.554 "data_offset": 2048, 00:32:38.554 "data_size": 63488 00:32:38.554 }, 00:32:38.554 { 00:32:38.554 "name": "BaseBdev4", 00:32:38.554 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:38.554 "is_configured": true, 00:32:38.554 "data_offset": 2048, 00:32:38.554 "data_size": 63488 00:32:38.554 } 00:32:38.554 ] 00:32:38.554 }' 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:38.554 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:38.813 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:38.813 08:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:39.071 08:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:39.329 [2024-07-12 08:59:14.330779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:39.329 [2024-07-12 08:59:14.331173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:32:39.329 [2024-07-12 08:59:14.331258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:32:39.329 [2024-07-12 08:59:14.331498] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:39.329 [2024-07-12 08:59:14.332077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:39.329 [2024-07-12 08:59:14.332272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:39.329 [2024-07-12 08:59:14.332514] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:39.329 [2024-07-12 08:59:14.332631] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:32:39.329 [2024-07-12 08:59:14.332725] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:39.329 BaseBdev1 00:32:39.329 08:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.261 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.519 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:40.519 "name": "raid_bdev1", 00:32:40.519 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:40.519 "strip_size_kb": 0, 00:32:40.519 "state": "online", 00:32:40.519 "raid_level": "raid1", 00:32:40.519 "superblock": true, 00:32:40.519 "num_base_bdevs": 4, 00:32:40.519 "num_base_bdevs_discovered": 2, 00:32:40.519 "num_base_bdevs_operational": 2, 00:32:40.519 "base_bdevs_list": [ 00:32:40.519 { 00:32:40.519 "name": null, 00:32:40.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.519 "is_configured": false, 00:32:40.519 "data_offset": 2048, 00:32:40.519 "data_size": 63488 00:32:40.519 }, 00:32:40.519 { 00:32:40.519 "name": null, 00:32:40.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.519 "is_configured": false, 00:32:40.519 "data_offset": 2048, 00:32:40.519 "data_size": 63488 00:32:40.519 }, 00:32:40.519 { 00:32:40.519 "name": "BaseBdev3", 00:32:40.519 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:40.519 "is_configured": 
true, 00:32:40.519 "data_offset": 2048, 00:32:40.519 "data_size": 63488 00:32:40.519 }, 00:32:40.519 { 00:32:40.519 "name": "BaseBdev4", 00:32:40.519 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:40.519 "is_configured": true, 00:32:40.519 "data_offset": 2048, 00:32:40.519 "data_size": 63488 00:32:40.519 } 00:32:40.519 ] 00:32:40.519 }' 00:32:40.519 08:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:40.519 08:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:41.453 "name": "raid_bdev1", 00:32:41.453 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:41.453 "strip_size_kb": 0, 00:32:41.453 "state": "online", 00:32:41.453 "raid_level": "raid1", 00:32:41.453 "superblock": true, 00:32:41.453 "num_base_bdevs": 4, 00:32:41.453 "num_base_bdevs_discovered": 2, 00:32:41.453 "num_base_bdevs_operational": 2, 00:32:41.453 "base_bdevs_list": [ 00:32:41.453 { 00:32:41.453 "name": null, 00:32:41.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.453 "is_configured": false, 00:32:41.453 "data_offset": 2048, 00:32:41.453 "data_size": 63488 00:32:41.453 }, 00:32:41.453 { 00:32:41.453 "name": null, 00:32:41.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.453 "is_configured": false, 00:32:41.453 "data_offset": 2048, 00:32:41.453 "data_size": 63488 00:32:41.453 }, 00:32:41.453 { 00:32:41.453 "name": "BaseBdev3", 00:32:41.453 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:41.453 "is_configured": true, 00:32:41.453 "data_offset": 2048, 00:32:41.453 "data_size": 63488 00:32:41.453 }, 00:32:41.453 { 00:32:41.453 "name": "BaseBdev4", 00:32:41.453 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:41.453 "is_configured": true, 00:32:41.453 "data_offset": 2048, 00:32:41.453 "data_size": 63488 00:32:41.453 } 00:32:41.453 ] 00:32:41.453 }' 00:32:41.453 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:41.711 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:41.971 [2024-07-12 08:59:16.983283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:41.971 [2024-07-12 08:59:16.983734] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:32:41.971 [2024-07-12 08:59:16.983855] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:41.971 request: 00:32:41.971 { 00:32:41.971 "base_bdev": "BaseBdev1", 00:32:41.971 "raid_bdev": "raid_bdev1", 00:32:41.971 "method": "bdev_raid_add_base_bdev", 00:32:41.971 "req_id": 1 00:32:41.971 } 00:32:41.971 Got JSON-RPC error response 00:32:41.971 response: 00:32:41.971 { 00:32:41.971 "code": -22, 00:32:41.971 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:41.971 } 00:32:41.971 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:32:41.971 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:41.971 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:41.971 08:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:41.971 08:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.908 08:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.908 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.908 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.166 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.166 "name": "raid_bdev1", 00:32:43.166 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:43.166 "strip_size_kb": 0, 00:32:43.166 "state": "online", 00:32:43.166 "raid_level": "raid1", 00:32:43.166 "superblock": true, 00:32:43.166 "num_base_bdevs": 4, 00:32:43.166 "num_base_bdevs_discovered": 2, 00:32:43.166 "num_base_bdevs_operational": 2, 00:32:43.166 "base_bdevs_list": [ 00:32:43.167 { 00:32:43.167 "name": null, 00:32:43.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.167 "is_configured": false, 00:32:43.167 "data_offset": 2048, 00:32:43.167 "data_size": 63488 00:32:43.167 }, 00:32:43.167 { 00:32:43.167 "name": null, 00:32:43.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.167 "is_configured": false, 00:32:43.167 "data_offset": 2048, 00:32:43.167 "data_size": 63488 00:32:43.167 }, 00:32:43.167 { 00:32:43.167 "name": "BaseBdev3", 00:32:43.167 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:43.167 "is_configured": true, 00:32:43.167 "data_offset": 2048, 00:32:43.167 "data_size": 63488 00:32:43.167 }, 00:32:43.167 { 00:32:43.167 "name": "BaseBdev4", 00:32:43.167 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:43.167 "is_configured": true, 00:32:43.167 "data_offset": 2048, 00:32:43.167 "data_size": 63488 00:32:43.167 } 00:32:43.167 ] 00:32:43.167 }' 00:32:43.167 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.167 08:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.103 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:44.103 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.103 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:44.103 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:44.103 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.104 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.104 08:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.104 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:44.104 "name": "raid_bdev1", 00:32:44.104 "uuid": "96d29f5e-09b8-4a40-a6de-037701bdaaec", 00:32:44.104 "strip_size_kb": 0, 00:32:44.104 "state": "online", 00:32:44.104 "raid_level": "raid1", 00:32:44.104 "superblock": 
true, 00:32:44.104 "num_base_bdevs": 4, 00:32:44.104 "num_base_bdevs_discovered": 2, 00:32:44.104 "num_base_bdevs_operational": 2, 00:32:44.104 "base_bdevs_list": [ 00:32:44.104 { 00:32:44.104 "name": null, 00:32:44.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.104 "is_configured": false, 00:32:44.104 "data_offset": 2048, 00:32:44.104 "data_size": 63488 00:32:44.104 }, 00:32:44.104 { 00:32:44.104 "name": null, 00:32:44.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.104 "is_configured": false, 00:32:44.104 "data_offset": 2048, 00:32:44.104 "data_size": 63488 00:32:44.104 }, 00:32:44.104 { 00:32:44.104 "name": "BaseBdev3", 00:32:44.104 "uuid": "f9b10c77-78d3-5aaf-8480-c9aa5de32a98", 00:32:44.104 "is_configured": true, 00:32:44.104 "data_offset": 2048, 00:32:44.104 "data_size": 63488 00:32:44.104 }, 00:32:44.104 { 00:32:44.104 "name": "BaseBdev4", 00:32:44.104 "uuid": "cdf02b49-6b6f-56f9-a5b2-2c64198ae1f1", 00:32:44.104 "is_configured": true, 00:32:44.104 "data_offset": 2048, 00:32:44.104 "data_size": 63488 00:32:44.104 } 00:32:44.104 ] 00:32:44.104 }' 00:32:44.104 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 149941 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 149941 ']' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 149941 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149941 00:32:44.363 killing process with pid 149941 00:32:44.363 Received shutdown signal, test time was about 60.000000 seconds 00:32:44.363 00:32:44.363 Latency(us) 00:32:44.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.363 =================================================================================================================== 00:32:44.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149941' 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 149941 00:32:44.363 08:59:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 149941 00:32:44.363 [2024-07-12 08:59:19.410723] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:44.363 [2024-07-12 08:59:19.410883] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:44.363 [2024-07-12 08:59:19.410960] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:32:44.363 [2024-07-12 08:59:19.411012] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:32:44.622 [2024-07-12 08:59:19.795716] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:45.999 ************************************ 00:32:45.999 END TEST raid_rebuild_test_sb 00:32:45.999 ************************************ 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:32:45.999 00:32:45.999 real 0m41.701s 00:32:45.999 user 1m2.938s 00:32:45.999 sys 0m5.580s 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.999 08:59:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:45.999 08:59:20 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:32:45.999 08:59:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:45.999 08:59:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:45.999 08:59:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:45.999 ************************************ 00:32:45.999 START TEST raid_rebuild_test_io 00:32:45.999 ************************************ 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=150985 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 150985 /var/tmp/spdk-raid.sock 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 150985 ']' 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:45.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:45.999 08:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:45.999 [2024-07-12 08:59:20.985839] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:32:45.999 [2024-07-12 08:59:20.986271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150985 ] 00:32:45.999 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:45.999 Zero copy mechanism will not be used. 
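Two details of the setup just traced are easy to miss. First, the base_bdevs loop only expands the member names; with num_base_bdevs=4 it is equivalent to:

base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)

Second, the zero-copy notice follows directly from the -o 3M option: 3 MiB is 3 * 1024 * 1024 = 3145728 bytes, above the 65536-byte zero-copy threshold the notice quotes, so bdevperf falls back to copying buffers. If -M and -q carry their usual bdevperf meanings (read percentage and queue depth), this is a 60-second randrw run (-t 60, matching the "Running I/O for 60 seconds" message further down) at a 50/50 read/write mix with a queue depth of 2 against raid_bdev1.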
00:32:45.999 [2024-07-12 08:59:21.155945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.258 [2024-07-12 08:59:21.357091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.536 [2024-07-12 08:59:21.537172] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:46.805 08:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.805 08:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:32:46.806 08:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:46.806 08:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:47.099 BaseBdev1_malloc 00:32:47.099 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:47.370 [2024-07-12 08:59:22.450922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:47.370 [2024-07-12 08:59:22.451305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.370 [2024-07-12 08:59:22.451508] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:47.370 [2024-07-12 08:59:22.451628] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.371 [2024-07-12 08:59:22.454278] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.371 [2024-07-12 08:59:22.454443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:47.371 BaseBdev1 00:32:47.371 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:47.371 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:47.629 BaseBdev2_malloc 00:32:47.629 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:47.897 [2024-07-12 08:59:22.925763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:47.897 [2024-07-12 08:59:22.926207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.897 [2024-07-12 08:59:22.926373] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:47.897 [2024-07-12 08:59:22.926490] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.897 [2024-07-12 08:59:22.929021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.897 [2024-07-12 08:59:22.929212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:47.897 BaseBdev2 00:32:47.897 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:47.897 08:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:48.169 BaseBdev3_malloc 00:32:48.169 08:59:23 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:48.427 [2024-07-12 08:59:23.393874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:48.427 [2024-07-12 08:59:23.394309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.427 [2024-07-12 08:59:23.394464] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:48.427 [2024-07-12 08:59:23.394586] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.427 [2024-07-12 08:59:23.397106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.427 [2024-07-12 08:59:23.397299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:48.427 BaseBdev3 00:32:48.427 08:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:48.427 08:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:48.686 BaseBdev4_malloc 00:32:48.686 08:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:48.944 [2024-07-12 08:59:23.914687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:48.944 [2024-07-12 08:59:23.915002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.944 [2024-07-12 08:59:23.915155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:48.944 [2024-07-12 08:59:23.915274] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.944 [2024-07-12 08:59:23.917876] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.944 [2024-07-12 08:59:23.918065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:48.944 BaseBdev4 00:32:48.944 08:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:49.202 spare_malloc 00:32:49.202 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:49.460 spare_delay 00:32:49.460 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:49.720 [2024-07-12 08:59:24.667208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:49.720 [2024-07-12 08:59:24.667605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.720 [2024-07-12 08:59:24.667756] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:49.720 [2024-07-12 08:59:24.667882] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.720 [2024-07-12 08:59:24.670463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.720 [2024-07-12 
08:59:24.670658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:49.720 spare 00:32:49.720 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:32:49.720 [2024-07-12 08:59:24.903432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:49.720 [2024-07-12 08:59:24.905915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:49.720 [2024-07-12 08:59:24.906152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:49.720 [2024-07-12 08:59:24.906365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:49.720 [2024-07-12 08:59:24.906642] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:32:49.720 [2024-07-12 08:59:24.906760] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:32:49.720 [2024-07-12 08:59:24.906978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:49.720 [2024-07-12 08:59:24.907492] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:32:49.720 [2024-07-12 08:59:24.907626] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:32:49.720 [2024-07-12 08:59:24.907959] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.979 08:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.979 08:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.979 "name": "raid_bdev1", 00:32:49.979 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:49.979 "strip_size_kb": 0, 00:32:49.979 "state": "online", 00:32:49.979 "raid_level": "raid1", 00:32:49.979 "superblock": false, 00:32:49.979 "num_base_bdevs": 4, 00:32:49.979 "num_base_bdevs_discovered": 4, 00:32:49.979 "num_base_bdevs_operational": 4, 00:32:49.979 "base_bdevs_list": [ 00:32:49.979 { 
00:32:49.979 "name": "BaseBdev1", 00:32:49.979 "uuid": "f01b9945-a078-565f-8a10-8684205cb6d4", 00:32:49.979 "is_configured": true, 00:32:49.979 "data_offset": 0, 00:32:49.979 "data_size": 65536 00:32:49.979 }, 00:32:49.979 { 00:32:49.979 "name": "BaseBdev2", 00:32:49.979 "uuid": "a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:49.979 "is_configured": true, 00:32:49.979 "data_offset": 0, 00:32:49.979 "data_size": 65536 00:32:49.979 }, 00:32:49.979 { 00:32:49.979 "name": "BaseBdev3", 00:32:49.979 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:49.979 "is_configured": true, 00:32:49.979 "data_offset": 0, 00:32:49.979 "data_size": 65536 00:32:49.979 }, 00:32:49.979 { 00:32:49.979 "name": "BaseBdev4", 00:32:49.979 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:49.979 "is_configured": true, 00:32:49.979 "data_offset": 0, 00:32:49.979 "data_size": 65536 00:32:49.979 } 00:32:49.979 ] 00:32:49.979 }' 00:32:49.979 08:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.979 08:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:50.916 08:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:50.916 08:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:50.916 [2024-07-12 08:59:26.088525] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.916 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:32:50.916 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.916 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:32:51.484 [2024-07-12 08:59:26.487811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:51.484 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:51.484 Zero copy mechanism will not be used. 00:32:51.484 Running I/O for 60 seconds... 
00:32:51.484 [2024-07-12 08:59:26.627848] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:51.484 [2024-07-12 08:59:26.628349] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.484 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.051 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.051 "name": "raid_bdev1", 00:32:52.051 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:52.051 "strip_size_kb": 0, 00:32:52.051 "state": "online", 00:32:52.051 "raid_level": "raid1", 00:32:52.051 "superblock": false, 00:32:52.052 "num_base_bdevs": 4, 00:32:52.052 "num_base_bdevs_discovered": 3, 00:32:52.052 "num_base_bdevs_operational": 3, 00:32:52.052 "base_bdevs_list": [ 00:32:52.052 { 00:32:52.052 "name": null, 00:32:52.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.052 "is_configured": false, 00:32:52.052 "data_offset": 0, 00:32:52.052 "data_size": 65536 00:32:52.052 }, 00:32:52.052 { 00:32:52.052 "name": "BaseBdev2", 00:32:52.052 "uuid": "a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:52.052 "is_configured": true, 00:32:52.052 "data_offset": 0, 00:32:52.052 "data_size": 65536 00:32:52.052 }, 00:32:52.052 { 00:32:52.052 "name": "BaseBdev3", 00:32:52.052 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:52.052 "is_configured": true, 00:32:52.052 "data_offset": 0, 00:32:52.052 "data_size": 65536 00:32:52.052 }, 00:32:52.052 { 00:32:52.052 "name": "BaseBdev4", 00:32:52.052 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:52.052 "is_configured": true, 00:32:52.052 "data_offset": 0, 00:32:52.052 "data_size": 65536 00:32:52.052 } 00:32:52.052 ] 00:32:52.052 }' 00:32:52.052 08:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.052 08:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:52.620 08:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:52.879 [2024-07-12 08:59:27.843252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:32:52.879 08:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:52.879 [2024-07-12 08:59:27.911108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:52.879 [2024-07-12 08:59:27.913565] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:52.879 [2024-07-12 08:59:28.039406] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:52.879 [2024-07-12 08:59:28.041090] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:53.137 [2024-07-12 08:59:28.270770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:53.137 [2024-07-12 08:59:28.271878] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:53.703 [2024-07-12 08:59:28.601069] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:53.703 [2024-07-12 08:59:28.602793] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:53.703 [2024-07-12 08:59:28.807710] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:53.703 [2024-07-12 08:59:28.808851] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.962 08:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:54.221 "name": "raid_bdev1", 00:32:54.221 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:54.221 "strip_size_kb": 0, 00:32:54.221 "state": "online", 00:32:54.221 "raid_level": "raid1", 00:32:54.221 "superblock": false, 00:32:54.221 "num_base_bdevs": 4, 00:32:54.221 "num_base_bdevs_discovered": 4, 00:32:54.221 "num_base_bdevs_operational": 4, 00:32:54.221 "process": { 00:32:54.221 "type": "rebuild", 00:32:54.221 "target": "spare", 00:32:54.221 "progress": { 00:32:54.221 "blocks": 12288, 00:32:54.221 "percent": 18 00:32:54.221 } 00:32:54.221 }, 00:32:54.221 "base_bdevs_list": [ 00:32:54.221 { 00:32:54.221 "name": "spare", 00:32:54.221 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:32:54.221 "is_configured": true, 00:32:54.221 "data_offset": 0, 00:32:54.221 "data_size": 65536 00:32:54.221 }, 00:32:54.221 { 00:32:54.221 "name": "BaseBdev2", 00:32:54.221 "uuid": 
"a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:54.221 "is_configured": true, 00:32:54.221 "data_offset": 0, 00:32:54.221 "data_size": 65536 00:32:54.221 }, 00:32:54.221 { 00:32:54.221 "name": "BaseBdev3", 00:32:54.221 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:54.221 "is_configured": true, 00:32:54.221 "data_offset": 0, 00:32:54.221 "data_size": 65536 00:32:54.221 }, 00:32:54.221 { 00:32:54.221 "name": "BaseBdev4", 00:32:54.221 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:54.221 "is_configured": true, 00:32:54.221 "data_offset": 0, 00:32:54.221 "data_size": 65536 00:32:54.221 } 00:32:54.221 ] 00:32:54.221 }' 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:54.221 [2024-07-12 08:59:29.186462] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:54.221 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:54.221 [2024-07-12 08:59:29.407032] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:54.221 [2024-07-12 08:59:29.407611] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:54.479 [2024-07-12 08:59:29.548274] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.479 [2024-07-12 08:59:29.631835] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:54.479 [2024-07-12 08:59:29.642656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:54.479 [2024-07-12 08:59:29.642882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:54.479 [2024-07-12 08:59:29.642925] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:54.479 [2024-07-12 08:59:29.671461] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.738 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.997 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.997 "name": "raid_bdev1", 00:32:54.997 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:54.997 "strip_size_kb": 0, 00:32:54.997 "state": "online", 00:32:54.997 "raid_level": "raid1", 00:32:54.997 "superblock": false, 00:32:54.997 "num_base_bdevs": 4, 00:32:54.997 "num_base_bdevs_discovered": 3, 00:32:54.997 "num_base_bdevs_operational": 3, 00:32:54.997 "base_bdevs_list": [ 00:32:54.997 { 00:32:54.997 "name": null, 00:32:54.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.997 "is_configured": false, 00:32:54.997 "data_offset": 0, 00:32:54.997 "data_size": 65536 00:32:54.997 }, 00:32:54.997 { 00:32:54.997 "name": "BaseBdev2", 00:32:54.997 "uuid": "a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:54.997 "is_configured": true, 00:32:54.997 "data_offset": 0, 00:32:54.997 "data_size": 65536 00:32:54.997 }, 00:32:54.997 { 00:32:54.997 "name": "BaseBdev3", 00:32:54.997 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:54.997 "is_configured": true, 00:32:54.997 "data_offset": 0, 00:32:54.997 "data_size": 65536 00:32:54.997 }, 00:32:54.997 { 00:32:54.997 "name": "BaseBdev4", 00:32:54.997 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:54.997 "is_configured": true, 00:32:54.997 "data_offset": 0, 00:32:54.997 "data_size": 65536 00:32:54.997 } 00:32:54.997 ] 00:32:54.997 }' 00:32:54.997 08:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.997 08:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:55.563 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:55.563 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:55.563 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:55.563 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:55.564 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:55.564 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.564 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.822 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:55.822 "name": "raid_bdev1", 00:32:55.822 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:55.822 "strip_size_kb": 0, 00:32:55.822 "state": "online", 00:32:55.822 "raid_level": "raid1", 00:32:55.822 "superblock": false, 00:32:55.822 "num_base_bdevs": 4, 00:32:55.822 "num_base_bdevs_discovered": 3, 00:32:55.822 "num_base_bdevs_operational": 3, 00:32:55.822 "base_bdevs_list": [ 00:32:55.822 { 00:32:55.822 "name": null, 00:32:55.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.822 "is_configured": false, 00:32:55.822 "data_offset": 0, 00:32:55.822 
"data_size": 65536 00:32:55.822 }, 00:32:55.822 { 00:32:55.822 "name": "BaseBdev2", 00:32:55.822 "uuid": "a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:55.822 "is_configured": true, 00:32:55.822 "data_offset": 0, 00:32:55.822 "data_size": 65536 00:32:55.822 }, 00:32:55.822 { 00:32:55.822 "name": "BaseBdev3", 00:32:55.822 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:55.822 "is_configured": true, 00:32:55.822 "data_offset": 0, 00:32:55.822 "data_size": 65536 00:32:55.822 }, 00:32:55.822 { 00:32:55.822 "name": "BaseBdev4", 00:32:55.822 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:55.822 "is_configured": true, 00:32:55.822 "data_offset": 0, 00:32:55.822 "data_size": 65536 00:32:55.822 } 00:32:55.822 ] 00:32:55.822 }' 00:32:55.822 08:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:56.081 08:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:56.081 08:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:56.081 08:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:56.081 08:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:56.340 [2024-07-12 08:59:31.347895] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:56.340 [2024-07-12 08:59:31.413284] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:32:56.340 [2024-07-12 08:59:31.415729] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:56.340 08:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:56.340 [2024-07-12 08:59:31.518028] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:56.340 [2024-07-12 08:59:31.519008] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:56.599 [2024-07-12 08:59:31.646629] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:56.600 [2024-07-12 08:59:31.647286] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:56.859 [2024-07-12 08:59:31.976395] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:56.859 [2024-07-12 08:59:31.977375] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:57.117 [2024-07-12 08:59:32.180575] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:57.117 [2024-07-12 08:59:32.181141] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.376 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.376 [2024-07-12 08:59:32.514754] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:57.635 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:57.635 "name": "raid_bdev1", 00:32:57.635 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:57.635 "strip_size_kb": 0, 00:32:57.635 "state": "online", 00:32:57.635 "raid_level": "raid1", 00:32:57.635 "superblock": false, 00:32:57.635 "num_base_bdevs": 4, 00:32:57.635 "num_base_bdevs_discovered": 4, 00:32:57.635 "num_base_bdevs_operational": 4, 00:32:57.635 "process": { 00:32:57.635 "type": "rebuild", 00:32:57.635 "target": "spare", 00:32:57.635 "progress": { 00:32:57.635 "blocks": 18432, 00:32:57.635 "percent": 28 00:32:57.635 } 00:32:57.635 }, 00:32:57.635 "base_bdevs_list": [ 00:32:57.635 { 00:32:57.635 "name": "spare", 00:32:57.635 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:32:57.635 "is_configured": true, 00:32:57.635 "data_offset": 0, 00:32:57.635 "data_size": 65536 00:32:57.635 }, 00:32:57.635 { 00:32:57.635 "name": "BaseBdev2", 00:32:57.635 "uuid": "a982d6d6-3e20-5e46-855e-3605c4006d0c", 00:32:57.635 "is_configured": true, 00:32:57.635 "data_offset": 0, 00:32:57.635 "data_size": 65536 00:32:57.635 }, 00:32:57.635 { 00:32:57.635 "name": "BaseBdev3", 00:32:57.635 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:57.635 "is_configured": true, 00:32:57.635 "data_offset": 0, 00:32:57.635 "data_size": 65536 00:32:57.635 }, 00:32:57.635 { 00:32:57.635 "name": "BaseBdev4", 00:32:57.636 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:57.636 "is_configured": true, 00:32:57.636 "data_offset": 0, 00:32:57.636 "data_size": 65536 00:32:57.636 } 00:32:57.636 ] 00:32:57.636 }' 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:57.636 [2024-07-12 08:59:32.722653] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:32:57.636 08:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:57.894 [2024-07-12 08:59:32.833704] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:32:57.894 [2024-07-12 08:59:33.005104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:57.894 [2024-07-12 08:59:33.049836] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:32:58.152 [2024-07-12 08:59:33.149794] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:32:58.152 [2024-07-12 08:59:33.150142] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:32:58.153 [2024-07-12 08:59:33.160902] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.153 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.411 "name": "raid_bdev1", 00:32:58.411 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:58.411 "strip_size_kb": 0, 00:32:58.411 "state": "online", 00:32:58.411 "raid_level": "raid1", 00:32:58.411 "superblock": false, 00:32:58.411 "num_base_bdevs": 4, 00:32:58.411 "num_base_bdevs_discovered": 3, 00:32:58.411 "num_base_bdevs_operational": 3, 00:32:58.411 "process": { 00:32:58.411 "type": "rebuild", 00:32:58.411 "target": "spare", 00:32:58.411 "progress": { 00:32:58.411 "blocks": 30720, 00:32:58.411 "percent": 46 00:32:58.411 } 00:32:58.411 }, 00:32:58.411 "base_bdevs_list": [ 00:32:58.411 { 00:32:58.411 "name": "spare", 00:32:58.411 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:32:58.411 "is_configured": true, 00:32:58.411 "data_offset": 0, 00:32:58.411 "data_size": 65536 00:32:58.411 }, 00:32:58.411 { 00:32:58.411 "name": null, 00:32:58.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.411 "is_configured": false, 00:32:58.411 "data_offset": 0, 00:32:58.411 "data_size": 65536 00:32:58.411 }, 00:32:58.411 { 00:32:58.411 "name": "BaseBdev3", 00:32:58.411 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:58.411 "is_configured": true, 00:32:58.411 "data_offset": 0, 00:32:58.411 "data_size": 65536 00:32:58.411 }, 00:32:58.411 { 00:32:58.411 "name": "BaseBdev4", 00:32:58.411 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:58.411 "is_configured": true, 00:32:58.411 "data_offset": 0, 00:32:58.411 "data_size": 65536 00:32:58.411 } 00:32:58.411 ] 00:32:58.411 }' 00:32:58.411 08:59:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.411 [2024-07-12 08:59:33.505707] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=1072 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.411 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.669 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.669 "name": "raid_bdev1", 00:32:58.669 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:32:58.669 "strip_size_kb": 0, 00:32:58.669 "state": "online", 00:32:58.669 "raid_level": "raid1", 00:32:58.669 "superblock": false, 00:32:58.669 "num_base_bdevs": 4, 00:32:58.669 "num_base_bdevs_discovered": 3, 00:32:58.669 "num_base_bdevs_operational": 3, 00:32:58.669 "process": { 00:32:58.669 "type": "rebuild", 00:32:58.669 "target": "spare", 00:32:58.669 "progress": { 00:32:58.669 "blocks": 36864, 00:32:58.669 "percent": 56 00:32:58.669 } 00:32:58.669 }, 00:32:58.669 "base_bdevs_list": [ 00:32:58.669 { 00:32:58.669 "name": "spare", 00:32:58.669 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:32:58.669 "is_configured": true, 00:32:58.669 "data_offset": 0, 00:32:58.669 "data_size": 65536 00:32:58.669 }, 00:32:58.669 { 00:32:58.669 "name": null, 00:32:58.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.669 "is_configured": false, 00:32:58.669 "data_offset": 0, 00:32:58.669 "data_size": 65536 00:32:58.669 }, 00:32:58.669 { 00:32:58.669 "name": "BaseBdev3", 00:32:58.669 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:32:58.669 "is_configured": true, 00:32:58.669 "data_offset": 0, 00:32:58.669 "data_size": 65536 00:32:58.669 }, 00:32:58.669 { 00:32:58.669 "name": "BaseBdev4", 00:32:58.669 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:32:58.669 "is_configured": true, 00:32:58.669 "data_offset": 0, 00:32:58.669 "data_size": 65536 00:32:58.669 } 00:32:58.669 ] 00:32:58.669 }' 00:32:58.669 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.669 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.670 08:59:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.928 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.928 08:59:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:59.188 [2024-07-12 08:59:34.134982] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:32:59.755 [2024-07-12 08:59:34.694602] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.755 08:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.014 [2024-07-12 08:59:35.030915] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:00.273 "name": "raid_bdev1", 00:33:00.273 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:33:00.273 "strip_size_kb": 0, 00:33:00.273 "state": "online", 00:33:00.273 "raid_level": "raid1", 00:33:00.273 "superblock": false, 00:33:00.273 "num_base_bdevs": 4, 00:33:00.273 "num_base_bdevs_discovered": 3, 00:33:00.273 "num_base_bdevs_operational": 3, 00:33:00.273 "process": { 00:33:00.273 "type": "rebuild", 00:33:00.273 "target": "spare", 00:33:00.273 "progress": { 00:33:00.273 "blocks": 57344, 00:33:00.273 "percent": 87 00:33:00.273 } 00:33:00.273 }, 00:33:00.273 "base_bdevs_list": [ 00:33:00.273 { 00:33:00.273 "name": "spare", 00:33:00.273 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:33:00.273 "is_configured": true, 00:33:00.273 "data_offset": 0, 00:33:00.273 "data_size": 65536 00:33:00.273 }, 00:33:00.273 { 00:33:00.273 "name": null, 00:33:00.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.273 "is_configured": false, 00:33:00.273 "data_offset": 0, 00:33:00.273 "data_size": 65536 00:33:00.273 }, 00:33:00.273 { 00:33:00.273 "name": "BaseBdev3", 00:33:00.273 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:33:00.273 "is_configured": true, 00:33:00.273 "data_offset": 0, 00:33:00.273 "data_size": 65536 00:33:00.273 }, 00:33:00.273 { 00:33:00.273 "name": "BaseBdev4", 00:33:00.273 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:33:00.273 "is_configured": true, 00:33:00.273 "data_offset": 0, 00:33:00.273 "data_size": 65536 00:33:00.273 } 00:33:00.273 ] 00:33:00.273 }' 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:00.273 [2024-07-12 08:59:35.250522] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.273 08:59:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:00.532 [2024-07-12 08:59:35.688956] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:00.791 [2024-07-12 08:59:35.795612] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:00.791 [2024-07-12 08:59:35.798766] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.359 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.618 "name": "raid_bdev1", 00:33:01.618 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:33:01.618 "strip_size_kb": 0, 00:33:01.618 "state": "online", 00:33:01.618 "raid_level": "raid1", 00:33:01.618 "superblock": false, 00:33:01.618 "num_base_bdevs": 4, 00:33:01.618 "num_base_bdevs_discovered": 3, 00:33:01.618 "num_base_bdevs_operational": 3, 00:33:01.618 "base_bdevs_list": [ 00:33:01.618 { 00:33:01.618 "name": "spare", 00:33:01.618 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:33:01.618 "is_configured": true, 00:33:01.618 "data_offset": 0, 00:33:01.618 "data_size": 65536 00:33:01.618 }, 00:33:01.618 { 00:33:01.618 "name": null, 00:33:01.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.618 "is_configured": false, 00:33:01.618 "data_offset": 0, 00:33:01.618 "data_size": 65536 00:33:01.618 }, 00:33:01.618 { 00:33:01.618 "name": "BaseBdev3", 00:33:01.618 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:33:01.618 "is_configured": true, 00:33:01.618 "data_offset": 0, 00:33:01.618 "data_size": 65536 00:33:01.618 }, 00:33:01.618 { 00:33:01.618 "name": "BaseBdev4", 00:33:01.618 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:33:01.618 "is_configured": true, 00:33:01.618 "data_offset": 0, 00:33:01.618 "data_size": 65536 00:33:01.618 } 00:33:01.618 ] 00:33:01.618 }' 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:01.618 
08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.618 08:59:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.877 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.877 "name": "raid_bdev1", 00:33:01.877 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:33:01.877 "strip_size_kb": 0, 00:33:01.877 "state": "online", 00:33:01.877 "raid_level": "raid1", 00:33:01.877 "superblock": false, 00:33:01.877 "num_base_bdevs": 4, 00:33:01.877 "num_base_bdevs_discovered": 3, 00:33:01.877 "num_base_bdevs_operational": 3, 00:33:01.877 "base_bdevs_list": [ 00:33:01.877 { 00:33:01.877 "name": "spare", 00:33:01.877 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:33:01.877 "is_configured": true, 00:33:01.877 "data_offset": 0, 00:33:01.877 "data_size": 65536 00:33:01.877 }, 00:33:01.877 { 00:33:01.877 "name": null, 00:33:01.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.877 "is_configured": false, 00:33:01.877 "data_offset": 0, 00:33:01.877 "data_size": 65536 00:33:01.877 }, 00:33:01.877 { 00:33:01.877 "name": "BaseBdev3", 00:33:01.877 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:33:01.877 "is_configured": true, 00:33:01.877 "data_offset": 0, 00:33:01.877 "data_size": 65536 00:33:01.877 }, 00:33:01.877 { 00:33:01.877 "name": "BaseBdev4", 00:33:01.877 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:33:01.877 "is_configured": true, 00:33:01.877 "data_offset": 0, 00:33:01.877 "data_size": 65536 00:33:01.877 } 00:33:01.877 ] 00:33:01.877 }' 00:33:01.877 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.136 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.395 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.395 "name": "raid_bdev1", 00:33:02.395 "uuid": "cd6af554-057c-4b9a-8e7e-e08836d4e623", 00:33:02.395 "strip_size_kb": 0, 00:33:02.395 "state": "online", 00:33:02.395 "raid_level": "raid1", 00:33:02.395 "superblock": false, 00:33:02.395 "num_base_bdevs": 4, 00:33:02.395 "num_base_bdevs_discovered": 3, 00:33:02.395 "num_base_bdevs_operational": 3, 00:33:02.395 "base_bdevs_list": [ 00:33:02.395 { 00:33:02.395 "name": "spare", 00:33:02.395 "uuid": "68ec742e-b716-511d-b989-b2a93a5ba752", 00:33:02.395 "is_configured": true, 00:33:02.395 "data_offset": 0, 00:33:02.395 "data_size": 65536 00:33:02.395 }, 00:33:02.395 { 00:33:02.395 "name": null, 00:33:02.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.395 "is_configured": false, 00:33:02.395 "data_offset": 0, 00:33:02.395 "data_size": 65536 00:33:02.395 }, 00:33:02.395 { 00:33:02.395 "name": "BaseBdev3", 00:33:02.395 "uuid": "393438a3-3258-52ae-84f5-a00b2b73f91d", 00:33:02.395 "is_configured": true, 00:33:02.395 "data_offset": 0, 00:33:02.395 "data_size": 65536 00:33:02.395 }, 00:33:02.395 { 00:33:02.395 "name": "BaseBdev4", 00:33:02.395 "uuid": "660fa378-3d82-5632-87a1-accbc1375877", 00:33:02.395 "is_configured": true, 00:33:02.395 "data_offset": 0, 00:33:02.395 "data_size": 65536 00:33:02.395 } 00:33:02.395 ] 00:33:02.395 }' 00:33:02.395 08:59:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.395 08:59:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:02.963 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:03.261 [2024-07-12 08:59:38.346794] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:03.261 [2024-07-12 08:59:38.347109] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:03.261 00:33:03.261 Latency(us) 00:33:03.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.261 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:03.261 raid_bdev1 : 11.95 91.14 273.43 0.00 0.00 15425.11 351.88 112483.61 00:33:03.261 =================================================================================================================== 00:33:03.261 Total : 91.14 273.43 0.00 0.00 15425.11 351.88 112483.61 00:33:03.531 [2024-07-12 08:59:38.457107] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
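A minimal sketch of the rebuild-progress check driven repeatedly above, assuming the same rpc.py script and /var/tmp/spdk-raid.sock socket used throughout this run; the helper name and the one-second poll interval are illustrative and not taken from bdev_raid.sh:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Poll bdev_raid_get_bdevs until the rebuild process block disappears.
    wait_for_rebuild() {
        local raid_bdev=$1 info ptype target
        while true; do
            info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev\")")
            ptype=$(jq -r '.process.type // "none"' <<< "$info")
            target=$(jq -r '.process.target // "none"' <<< "$info")
            # While the rebuild runs, process.type reports "rebuild" and process.target
            # reports the spare bdev; both fall back to "none" once the rebuild completes.
            [[ $ptype == none && $target == none ]] && return 0
            sleep 1
        done
    }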
00:33:03.531 [2024-07-12 08:59:38.457386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:03.531 0 00:33:03.532 [2024-07-12 08:59:38.457550] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:03.532 [2024-07-12 08:59:38.457569] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:33:03.532 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.532 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:03.790 08:59:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:04.049 /dev/nbd0 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:04.049 1+0 records in 00:33:04.049 1+0 records out 00:33:04.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500723 s, 8.2 MB/s 00:33:04.049 08:59:39 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:04.049 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:04.308 /dev/nbd1 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:04.308 1+0 records in 00:33:04.308 1+0 records out 00:33:04.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550637 s, 7.4 MB/s 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:04.308 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.568 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 
-- # nbd_list=($3) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:04.828 08:59:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:05.087 /dev/nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:05.087 1+0 records in 00:33:05.087 1+0 records out 00:33:05.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040165 s, 10.2 MB/s 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:05.087 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:05.088 08:59:40 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:05.347 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 150985 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 150985 ']' 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 150985 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150985 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150985' 00:33:05.914 killing process with pid 150985 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 150985 00:33:05.914 Received shutdown signal, test time was about 14.381724 seconds 00:33:05.914 00:33:05.914 Latency(us) 00:33:05.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.914 =================================================================================================================== 00:33:05.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:05.914 08:59:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 150985 00:33:05.914 [2024-07-12 08:59:40.872392] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:06.173 [2024-07-12 08:59:41.205414] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:07.108 08:59:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:33:07.108 00:33:07.108 real 0m21.390s 00:33:07.108 user 0m33.814s 00:33:07.108 sys 0m2.703s 00:33:07.108 08:59:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:07.108 08:59:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:07.108 ************************************ 00:33:07.108 END TEST raid_rebuild_test_io 00:33:07.108 ************************************ 00:33:07.369 08:59:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:07.369 08:59:42 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:33:07.369 08:59:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:07.369 08:59:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.369 08:59:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.369 ************************************ 00:33:07.369 START TEST raid_rebuild_test_sb_io 00:33:07.369 ************************************ 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:07.369 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:33:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=151558 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 151558 /var/tmp/spdk-raid.sock 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 151558 ']' 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
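For reference, the data-integrity step of the raid_rebuild_test_io run above reduces to exporting the rebuilt spare and each surviving base bdev over NBD and comparing them byte for byte; a minimal sketch using only the RPCs captured in this log (device paths illustrative, nbd kernel module assumed loaded):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc_py nbd_start_disk spare /dev/nbd0
    $rpc_py nbd_start_disk BaseBdev3 /dev/nbd1
    # Offset 0 mirrors the cmp -i 0 seen above: with no superblock, data starts at block 0.
    cmp -i 0 /dev/nbd0 /dev/nbd1
    $rpc_py nbd_stop_disk /dev/nbd1
    $rpc_py nbd_stop_disk /dev/nbd0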
00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:07.370 08:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:07.370 [2024-07-12 08:59:42.431399] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:33:07.370 [2024-07-12 08:59:42.431980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151558 ] 00:33:07.370 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:07.370 Zero copy mechanism will not be used. 00:33:07.627 [2024-07-12 08:59:42.592278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.627 [2024-07-12 08:59:42.793447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.885 [2024-07-12 08:59:42.974189] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.452 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.452 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:33:08.452 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:08.452 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:08.452 BaseBdev1_malloc 00:33:08.452 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:08.710 [2024-07-12 08:59:43.841284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:08.710 [2024-07-12 08:59:43.841632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.710 [2024-07-12 08:59:43.841827] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:33:08.710 [2024-07-12 08:59:43.841942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.710 [2024-07-12 08:59:43.844454] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.710 [2024-07-12 08:59:43.844638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:08.710 BaseBdev1 00:33:08.710 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:08.710 08:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:08.969 BaseBdev2_malloc 00:33:08.969 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:09.227 [2024-07-12 08:59:44.341017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:09.227 [2024-07-12 08:59:44.341436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.227 [2024-07-12 08:59:44.341592] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007b80 00:33:09.227 [2024-07-12 08:59:44.341707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.227 [2024-07-12 08:59:44.344253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.227 [2024-07-12 08:59:44.344454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:09.227 BaseBdev2 00:33:09.227 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:09.228 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:09.486 BaseBdev3_malloc 00:33:09.745 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:10.003 [2024-07-12 08:59:44.964941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:10.003 [2024-07-12 08:59:44.965333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:10.003 [2024-07-12 08:59:44.965486] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:33:10.003 [2024-07-12 08:59:44.965605] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:10.003 [2024-07-12 08:59:44.968128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:10.003 [2024-07-12 08:59:44.968337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:10.003 BaseBdev3 00:33:10.003 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:10.003 08:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:10.261 BaseBdev4_malloc 00:33:10.261 08:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:10.261 [2024-07-12 08:59:45.433253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:10.261 [2024-07-12 08:59:45.433531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:10.261 [2024-07-12 08:59:45.433721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:10.261 [2024-07-12 08:59:45.433886] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:10.261 [2024-07-12 08:59:45.436636] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:10.262 [2024-07-12 08:59:45.436812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:10.262 BaseBdev4 00:33:10.262 08:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:10.829 spare_malloc 00:33:10.829 08:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:10.829 spare_delay 00:33:10.829 08:59:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:11.087 [2024-07-12 08:59:46.149873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:11.087 [2024-07-12 08:59:46.150244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.087 [2024-07-12 08:59:46.150397] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:11.087 [2024-07-12 08:59:46.150526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.087 [2024-07-12 08:59:46.153128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.087 [2024-07-12 08:59:46.153319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:11.087 spare 00:33:11.087 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:11.346 [2024-07-12 08:59:46.362132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.346 [2024-07-12 08:59:46.364468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:11.346 [2024-07-12 08:59:46.364697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:11.346 [2024-07-12 08:59:46.364880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:11.346 [2024-07-12 08:59:46.365242] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:33:11.346 [2024-07-12 08:59:46.365386] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:11.346 [2024-07-12 08:59:46.365566] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:11.346 [2024-07-12 08:59:46.366141] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:33:11.346 [2024-07-12 08:59:46.366267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:33:11.346 [2024-07-12 08:59:46.366601] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.346 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.605 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:11.605 "name": "raid_bdev1", 00:33:11.605 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:11.605 "strip_size_kb": 0, 00:33:11.605 "state": "online", 00:33:11.605 "raid_level": "raid1", 00:33:11.605 "superblock": true, 00:33:11.605 "num_base_bdevs": 4, 00:33:11.605 "num_base_bdevs_discovered": 4, 00:33:11.605 "num_base_bdevs_operational": 4, 00:33:11.605 "base_bdevs_list": [ 00:33:11.605 { 00:33:11.605 "name": "BaseBdev1", 00:33:11.605 "uuid": "2fbca38c-3638-501d-b5c9-1616a8e57454", 00:33:11.605 "is_configured": true, 00:33:11.605 "data_offset": 2048, 00:33:11.605 "data_size": 63488 00:33:11.605 }, 00:33:11.605 { 00:33:11.605 "name": "BaseBdev2", 00:33:11.605 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:11.605 "is_configured": true, 00:33:11.605 "data_offset": 2048, 00:33:11.605 "data_size": 63488 00:33:11.605 }, 00:33:11.605 { 00:33:11.605 "name": "BaseBdev3", 00:33:11.605 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:11.605 "is_configured": true, 00:33:11.605 "data_offset": 2048, 00:33:11.605 "data_size": 63488 00:33:11.605 }, 00:33:11.605 { 00:33:11.605 "name": "BaseBdev4", 00:33:11.605 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:11.605 "is_configured": true, 00:33:11.605 "data_offset": 2048, 00:33:11.605 "data_size": 63488 00:33:11.605 } 00:33:11.605 ] 00:33:11.605 }' 00:33:11.605 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:11.605 08:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:12.172 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:12.172 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:12.430 [2024-07-12 08:59:47.535143] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:12.431 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:33:12.431 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.431 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:12.689 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:33:12.689 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:33:12.689 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:12.689 08:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:33:12.689 [2024-07-12 08:59:47.874404] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:12.689 I/O 
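
The sizes in the dump above are internally consistent: each member was created with bdev_malloc_create 32 512 (32 MiB in 512-byte blocks), and the superblock reserves the first 2048 blocks, which is where both raid_bdev_size and data_size get 63488 from. raid1 mirrors, so the array is exactly one member's usable size:

    echo $(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks per 32 MiB base bdev
    echo $(( 65536 - 2048 ))             # 63488 usable blocks after the superblock
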
size of 3145728 is greater than zero copy threshold (65536). 00:33:12.689 Zero copy mechanism will not be used. 00:33:12.689 Running I/O for 60 seconds... 00:33:12.947 [2024-07-12 08:59:48.036226] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:12.948 [2024-07-12 08:59:48.044072] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.948 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.206 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.206 "name": "raid_bdev1", 00:33:13.206 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:13.206 "strip_size_kb": 0, 00:33:13.206 "state": "online", 00:33:13.206 "raid_level": "raid1", 00:33:13.206 "superblock": true, 00:33:13.206 "num_base_bdevs": 4, 00:33:13.206 "num_base_bdevs_discovered": 3, 00:33:13.206 "num_base_bdevs_operational": 3, 00:33:13.206 "base_bdevs_list": [ 00:33:13.206 { 00:33:13.206 "name": null, 00:33:13.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.206 "is_configured": false, 00:33:13.206 "data_offset": 2048, 00:33:13.206 "data_size": 63488 00:33:13.206 }, 00:33:13.206 { 00:33:13.206 "name": "BaseBdev2", 00:33:13.206 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:13.206 "is_configured": true, 00:33:13.206 "data_offset": 2048, 00:33:13.206 "data_size": 63488 00:33:13.206 }, 00:33:13.206 { 00:33:13.206 "name": "BaseBdev3", 00:33:13.206 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:13.206 "is_configured": true, 00:33:13.206 "data_offset": 2048, 00:33:13.206 "data_size": 63488 00:33:13.206 }, 00:33:13.206 { 00:33:13.206 "name": "BaseBdev4", 00:33:13.206 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:13.206 "is_configured": true, 00:33:13.206 "data_offset": 2048, 00:33:13.206 "data_size": 63488 00:33:13.206 } 00:33:13.206 ] 00:33:13.206 }' 00:33:13.206 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.206 08:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:14.141 08:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- 
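
BaseBdev1 is hot-removed while bdevperf is still issuing I/O; raid1 tolerates the loss, so the array stays online with 3 of 4 members and the emptied slot is reported as name null with an all-zero UUID. To check the member count directly rather than eyeballing the JSON, a jq filter along these lines would do it (illustrative only, not a command from this log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq '[.[] | select(.name == "raid_bdev1").base_bdevs_list[]
               | select(.is_configured)] | length'   # expect 3 at this point
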
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:14.142 [2024-07-12 08:59:49.276573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:14.400 08:59:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:14.400 [2024-07-12 08:59:49.343547] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:14.400 [2024-07-12 08:59:49.345820] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:14.400 [2024-07-12 08:59:49.456029] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:14.400 [2024-07-12 08:59:49.457020] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:14.659 [2024-07-12 08:59:49.684367] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:14.918 [2024-07-12 08:59:49.942218] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:15.177 [2024-07-12 08:59:50.172155] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.177 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.436 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:15.436 "name": "raid_bdev1", 00:33:15.436 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:15.436 "strip_size_kb": 0, 00:33:15.436 "state": "online", 00:33:15.436 "raid_level": "raid1", 00:33:15.436 "superblock": true, 00:33:15.436 "num_base_bdevs": 4, 00:33:15.436 "num_base_bdevs_discovered": 4, 00:33:15.436 "num_base_bdevs_operational": 4, 00:33:15.436 "process": { 00:33:15.436 "type": "rebuild", 00:33:15.436 "target": "spare", 00:33:15.436 "progress": { 00:33:15.436 "blocks": 14336, 00:33:15.436 "percent": 22 00:33:15.436 } 00:33:15.436 }, 00:33:15.436 "base_bdevs_list": [ 00:33:15.436 { 00:33:15.436 "name": "spare", 00:33:15.436 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:15.436 "is_configured": true, 00:33:15.436 "data_offset": 2048, 00:33:15.436 "data_size": 63488 00:33:15.436 }, 00:33:15.436 { 00:33:15.436 "name": "BaseBdev2", 00:33:15.436 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:15.436 "is_configured": true, 00:33:15.436 "data_offset": 2048, 00:33:15.436 "data_size": 63488 00:33:15.436 }, 00:33:15.436 { 00:33:15.436 "name": "BaseBdev3", 00:33:15.436 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 
00:33:15.436 "is_configured": true, 00:33:15.436 "data_offset": 2048, 00:33:15.436 "data_size": 63488 00:33:15.436 }, 00:33:15.436 { 00:33:15.436 "name": "BaseBdev4", 00:33:15.436 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:15.436 "is_configured": true, 00:33:15.436 "data_offset": 2048, 00:33:15.436 "data_size": 63488 00:33:15.436 } 00:33:15.436 ] 00:33:15.436 }' 00:33:15.436 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:15.695 [2024-07-12 08:59:50.641373] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:15.695 [2024-07-12 08:59:50.642011] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:15.695 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:15.695 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:15.695 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:15.695 08:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:15.954 [2024-07-12 08:59:50.928865] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:15.954 [2024-07-12 08:59:50.989033] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:15.954 [2024-07-12 08:59:51.147258] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:16.213 [2024-07-12 08:59:51.151554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.213 [2024-07-12 08:59:51.151790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:16.213 [2024-07-12 08:59:51.151834] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:16.213 [2024-07-12 08:59:51.180929] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.213 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.472 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.472 "name": "raid_bdev1", 00:33:16.472 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:16.472 "strip_size_kb": 0, 00:33:16.472 "state": "online", 00:33:16.472 "raid_level": "raid1", 00:33:16.472 "superblock": true, 00:33:16.472 "num_base_bdevs": 4, 00:33:16.472 "num_base_bdevs_discovered": 3, 00:33:16.472 "num_base_bdevs_operational": 3, 00:33:16.472 "base_bdevs_list": [ 00:33:16.472 { 00:33:16.472 "name": null, 00:33:16.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.472 "is_configured": false, 00:33:16.472 "data_offset": 2048, 00:33:16.472 "data_size": 63488 00:33:16.472 }, 00:33:16.472 { 00:33:16.472 "name": "BaseBdev2", 00:33:16.472 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:16.472 "is_configured": true, 00:33:16.472 "data_offset": 2048, 00:33:16.472 "data_size": 63488 00:33:16.472 }, 00:33:16.472 { 00:33:16.472 "name": "BaseBdev3", 00:33:16.472 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:16.472 "is_configured": true, 00:33:16.472 "data_offset": 2048, 00:33:16.472 "data_size": 63488 00:33:16.472 }, 00:33:16.472 { 00:33:16.472 "name": "BaseBdev4", 00:33:16.472 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:16.472 "is_configured": true, 00:33:16.472 "data_offset": 2048, 00:33:16.472 "data_size": 63488 00:33:16.472 } 00:33:16.472 ] 00:33:16.472 }' 00:33:16.472 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.472 08:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.038 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.603 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:17.603 "name": "raid_bdev1", 00:33:17.603 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:17.603 "strip_size_kb": 0, 00:33:17.603 "state": "online", 00:33:17.603 "raid_level": "raid1", 00:33:17.603 "superblock": true, 00:33:17.603 "num_base_bdevs": 4, 00:33:17.603 "num_base_bdevs_discovered": 3, 00:33:17.603 "num_base_bdevs_operational": 3, 00:33:17.603 "base_bdevs_list": [ 00:33:17.603 { 00:33:17.603 "name": null, 00:33:17.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.604 "is_configured": false, 00:33:17.604 "data_offset": 2048, 00:33:17.604 "data_size": 63488 00:33:17.604 }, 00:33:17.604 { 00:33:17.604 "name": "BaseBdev2", 00:33:17.604 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:17.604 "is_configured": true, 
00:33:17.604 "data_offset": 2048, 00:33:17.604 "data_size": 63488 00:33:17.604 }, 00:33:17.604 { 00:33:17.604 "name": "BaseBdev3", 00:33:17.604 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:17.604 "is_configured": true, 00:33:17.604 "data_offset": 2048, 00:33:17.604 "data_size": 63488 00:33:17.604 }, 00:33:17.604 { 00:33:17.604 "name": "BaseBdev4", 00:33:17.604 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:17.604 "is_configured": true, 00:33:17.604 "data_offset": 2048, 00:33:17.604 "data_size": 63488 00:33:17.604 } 00:33:17.604 ] 00:33:17.604 }' 00:33:17.604 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:17.604 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:17.604 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:17.604 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:17.604 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:17.861 [2024-07-12 08:59:52.875698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:17.861 08:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:17.861 [2024-07-12 08:59:52.942277] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:33:17.861 [2024-07-12 08:59:52.944740] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:17.861 [2024-07-12 08:59:53.055707] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:17.861 [2024-07-12 08:59:53.056705] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:18.119 [2024-07-12 08:59:53.277647] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:18.119 [2024-07-12 08:59:53.278735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:18.685 [2024-07-12 08:59:53.640386] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:18.685 [2024-07-12 08:59:53.758515] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:18.685 [2024-07-12 08:59:53.759616] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:18.943 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:18.943 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:18.944 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:18.944 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:18.944 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:18.944 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.944 08:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.944 [2024-07-12 08:59:54.109781] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:19.202 "name": "raid_bdev1", 00:33:19.202 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:19.202 "strip_size_kb": 0, 00:33:19.202 "state": "online", 00:33:19.202 "raid_level": "raid1", 00:33:19.202 "superblock": true, 00:33:19.202 "num_base_bdevs": 4, 00:33:19.202 "num_base_bdevs_discovered": 4, 00:33:19.202 "num_base_bdevs_operational": 4, 00:33:19.202 "process": { 00:33:19.202 "type": "rebuild", 00:33:19.202 "target": "spare", 00:33:19.202 "progress": { 00:33:19.202 "blocks": 14336, 00:33:19.202 "percent": 22 00:33:19.202 } 00:33:19.202 }, 00:33:19.202 "base_bdevs_list": [ 00:33:19.202 { 00:33:19.202 "name": "spare", 00:33:19.202 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:19.202 "is_configured": true, 00:33:19.202 "data_offset": 2048, 00:33:19.202 "data_size": 63488 00:33:19.202 }, 00:33:19.202 { 00:33:19.202 "name": "BaseBdev2", 00:33:19.202 "uuid": "7471c674-1ae3-542e-989c-5300aafc6e3c", 00:33:19.202 "is_configured": true, 00:33:19.202 "data_offset": 2048, 00:33:19.202 "data_size": 63488 00:33:19.202 }, 00:33:19.202 { 00:33:19.202 "name": "BaseBdev3", 00:33:19.202 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:19.202 "is_configured": true, 00:33:19.202 "data_offset": 2048, 00:33:19.202 "data_size": 63488 00:33:19.202 }, 00:33:19.202 { 00:33:19.202 "name": "BaseBdev4", 00:33:19.202 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:19.202 "is_configured": true, 00:33:19.202 "data_offset": 2048, 00:33:19.202 "data_size": 63488 00:33:19.202 } 00:33:19.202 ] 00:33:19.202 }' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:19.202 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:33:19.202 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:19.202 [2024-07-12 08:59:54.331077] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:19.202 [2024-07-12 08:59:54.331701] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
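
The "line 665: [: =: unary operator expected" failure above is a plain shell quoting bug in the test script, not a raid error: an empty variable expanded unquoted inside [ ], so the test collapsed to '[' = false ']' and [ complained that = has no left operand. The failed test merely returns non-zero, which is why the script carries on to the next step. A minimal reproduction and the usual fix (the real variable name at bdev_raid.sh line 665 is not visible in this log, so "flag" here is hypothetical):

    flag=
    [ $flag = false ]          # -> [: =: unary operator expected
    [ "${flag:-}" = false ]    # quoted: evaluates to false, no error
    [[ $flag = false ]]        # bash [[ ]] likewise avoids the word splitting
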
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:19.460 [2024-07-12 08:59:54.515110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:19.718 [2024-07-12 08:59:54.771570] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:33:19.718 [2024-07-12 08:59:54.771941] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.718 08:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.718 [2024-07-12 08:59:54.909475] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:19.978 "name": "raid_bdev1", 00:33:19.978 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:19.978 "strip_size_kb": 0, 00:33:19.978 "state": "online", 00:33:19.978 "raid_level": "raid1", 00:33:19.978 "superblock": true, 00:33:19.978 "num_base_bdevs": 4, 00:33:19.978 "num_base_bdevs_discovered": 3, 00:33:19.978 "num_base_bdevs_operational": 3, 00:33:19.978 "process": { 00:33:19.978 "type": "rebuild", 00:33:19.978 "target": "spare", 00:33:19.978 "progress": { 00:33:19.978 "blocks": 20480, 00:33:19.978 "percent": 32 00:33:19.978 } 00:33:19.978 }, 00:33:19.978 "base_bdevs_list": [ 00:33:19.978 { 00:33:19.978 "name": "spare", 00:33:19.978 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:19.978 "is_configured": true, 00:33:19.978 "data_offset": 2048, 00:33:19.978 "data_size": 63488 00:33:19.978 }, 00:33:19.978 { 00:33:19.978 "name": null, 00:33:19.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.978 "is_configured": false, 00:33:19.978 "data_offset": 2048, 00:33:19.978 "data_size": 63488 00:33:19.978 }, 00:33:19.978 { 00:33:19.978 "name": "BaseBdev3", 00:33:19.978 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:19.978 "is_configured": true, 00:33:19.978 "data_offset": 2048, 00:33:19.978 "data_size": 63488 00:33:19.978 }, 00:33:19.978 { 00:33:19.978 "name": "BaseBdev4", 00:33:19.978 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:19.978 "is_configured": true, 00:33:19.978 "data_offset": 2048, 00:33:19.978 "data_size": 63488 00:33:19.978 } 00:33:19.978 ] 00:33:19.978 }' 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:19.978 [2024-07-12 08:59:55.035907] bdev_raid.c: 839:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=1094 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.978 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.237 [2024-07-12 08:59:55.259126] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:33:20.237 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.237 "name": "raid_bdev1", 00:33:20.237 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:20.237 "strip_size_kb": 0, 00:33:20.237 "state": "online", 00:33:20.237 "raid_level": "raid1", 00:33:20.237 "superblock": true, 00:33:20.237 "num_base_bdevs": 4, 00:33:20.237 "num_base_bdevs_discovered": 3, 00:33:20.237 "num_base_bdevs_operational": 3, 00:33:20.237 "process": { 00:33:20.237 "type": "rebuild", 00:33:20.237 "target": "spare", 00:33:20.237 "progress": { 00:33:20.237 "blocks": 26624, 00:33:20.237 "percent": 41 00:33:20.237 } 00:33:20.237 }, 00:33:20.237 "base_bdevs_list": [ 00:33:20.237 { 00:33:20.237 "name": "spare", 00:33:20.237 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:20.237 "is_configured": true, 00:33:20.237 "data_offset": 2048, 00:33:20.237 "data_size": 63488 00:33:20.237 }, 00:33:20.237 { 00:33:20.237 "name": null, 00:33:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.237 "is_configured": false, 00:33:20.237 "data_offset": 2048, 00:33:20.237 "data_size": 63488 00:33:20.237 }, 00:33:20.237 { 00:33:20.237 "name": "BaseBdev3", 00:33:20.237 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:20.237 "is_configured": true, 00:33:20.237 "data_offset": 2048, 00:33:20.237 "data_size": 63488 00:33:20.237 }, 00:33:20.237 { 00:33:20.237 "name": "BaseBdev4", 00:33:20.237 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:20.237 "is_configured": true, 00:33:20.237 "data_offset": 2048, 00:33:20.237 "data_size": 63488 00:33:20.237 } 00:33:20.237 ] 00:33:20.237 }' 00:33:20.237 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:20.237 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:33:20.496 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:20.496 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.496 08:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:20.496 [2024-07-12 08:59:55.653903] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:33:20.755 [2024-07-12 08:59:55.871900] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:33:21.014 [2024-07-12 08:59:56.197778] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:21.582 "name": "raid_bdev1", 00:33:21.582 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:21.582 "strip_size_kb": 0, 00:33:21.582 "state": "online", 00:33:21.582 "raid_level": "raid1", 00:33:21.582 "superblock": true, 00:33:21.582 "num_base_bdevs": 4, 00:33:21.582 "num_base_bdevs_discovered": 3, 00:33:21.582 "num_base_bdevs_operational": 3, 00:33:21.582 "process": { 00:33:21.582 "type": "rebuild", 00:33:21.582 "target": "spare", 00:33:21.582 "progress": { 00:33:21.582 "blocks": 45056, 00:33:21.582 "percent": 70 00:33:21.582 } 00:33:21.582 }, 00:33:21.582 "base_bdevs_list": [ 00:33:21.582 { 00:33:21.582 "name": "spare", 00:33:21.582 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:21.582 "is_configured": true, 00:33:21.582 "data_offset": 2048, 00:33:21.582 "data_size": 63488 00:33:21.582 }, 00:33:21.582 { 00:33:21.582 "name": null, 00:33:21.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.582 "is_configured": false, 00:33:21.582 "data_offset": 2048, 00:33:21.582 "data_size": 63488 00:33:21.582 }, 00:33:21.582 { 00:33:21.582 "name": "BaseBdev3", 00:33:21.582 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:21.582 "is_configured": true, 00:33:21.582 "data_offset": 2048, 00:33:21.582 "data_size": 63488 00:33:21.582 }, 00:33:21.582 { 00:33:21.582 "name": "BaseBdev4", 00:33:21.582 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:21.582 "is_configured": true, 00:33:21.582 "data_offset": 2048, 00:33:21.582 "data_size": 63488 00:33:21.582 } 00:33:21.582 ] 00:33:21.582 }' 00:33:21.582 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:33:21.841 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.841 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:21.841 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.841 08:59:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:22.100 [2024-07-12 08:59:57.087139] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:33:22.668 [2024-07-12 08:59:57.753673] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.668 08:59:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.668 [2024-07-12 08:59:57.853722] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:22.668 [2024-07-12 08:59:57.856879] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:22.928 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:22.928 "name": "raid_bdev1", 00:33:22.928 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:22.928 "strip_size_kb": 0, 00:33:22.928 "state": "online", 00:33:22.928 "raid_level": "raid1", 00:33:22.928 "superblock": true, 00:33:22.928 "num_base_bdevs": 4, 00:33:22.928 "num_base_bdevs_discovered": 3, 00:33:22.928 "num_base_bdevs_operational": 3, 00:33:22.928 "base_bdevs_list": [ 00:33:22.928 { 00:33:22.928 "name": "spare", 00:33:22.928 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:22.928 "is_configured": true, 00:33:22.928 "data_offset": 2048, 00:33:22.928 "data_size": 63488 00:33:22.928 }, 00:33:22.928 { 00:33:22.928 "name": null, 00:33:22.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.928 "is_configured": false, 00:33:22.928 "data_offset": 2048, 00:33:22.928 "data_size": 63488 00:33:22.928 }, 00:33:22.928 { 00:33:22.928 "name": "BaseBdev3", 00:33:22.928 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:22.928 "is_configured": true, 00:33:22.928 "data_offset": 2048, 00:33:22.928 "data_size": 63488 00:33:22.928 }, 00:33:22.928 { 00:33:22.928 "name": "BaseBdev4", 00:33:22.928 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:22.928 "is_configured": true, 00:33:22.928 "data_offset": 2048, 00:33:22.928 "data_size": 63488 00:33:22.928 } 00:33:22.928 ] 00:33:22.928 }' 00:33:22.928 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // 
"none"' 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:23.187 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.188 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.188 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.447 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.447 "name": "raid_bdev1", 00:33:23.447 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:23.447 "strip_size_kb": 0, 00:33:23.447 "state": "online", 00:33:23.447 "raid_level": "raid1", 00:33:23.447 "superblock": true, 00:33:23.447 "num_base_bdevs": 4, 00:33:23.447 "num_base_bdevs_discovered": 3, 00:33:23.447 "num_base_bdevs_operational": 3, 00:33:23.447 "base_bdevs_list": [ 00:33:23.447 { 00:33:23.447 "name": "spare", 00:33:23.447 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:23.447 "is_configured": true, 00:33:23.447 "data_offset": 2048, 00:33:23.447 "data_size": 63488 00:33:23.447 }, 00:33:23.447 { 00:33:23.447 "name": null, 00:33:23.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.447 "is_configured": false, 00:33:23.447 "data_offset": 2048, 00:33:23.447 "data_size": 63488 00:33:23.447 }, 00:33:23.447 { 00:33:23.447 "name": "BaseBdev3", 00:33:23.447 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:23.447 "is_configured": true, 00:33:23.447 "data_offset": 2048, 00:33:23.447 "data_size": 63488 00:33:23.447 }, 00:33:23.447 { 00:33:23.447 "name": "BaseBdev4", 00:33:23.447 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:23.447 "is_configured": true, 00:33:23.447 "data_offset": 2048, 00:33:23.447 "data_size": 63488 00:33:23.447 } 00:33:23.447 ] 00:33:23.447 }' 00:33:23.447 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.447 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:23.447 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:23.705 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:23.706 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:23.706 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:23.706 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.706 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.964 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:23.964 "name": "raid_bdev1", 00:33:23.964 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:23.964 "strip_size_kb": 0, 00:33:23.964 "state": "online", 00:33:23.964 "raid_level": "raid1", 00:33:23.964 "superblock": true, 00:33:23.964 "num_base_bdevs": 4, 00:33:23.964 "num_base_bdevs_discovered": 3, 00:33:23.964 "num_base_bdevs_operational": 3, 00:33:23.964 "base_bdevs_list": [ 00:33:23.964 { 00:33:23.964 "name": "spare", 00:33:23.964 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:23.964 "is_configured": true, 00:33:23.964 "data_offset": 2048, 00:33:23.964 "data_size": 63488 00:33:23.964 }, 00:33:23.964 { 00:33:23.964 "name": null, 00:33:23.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.964 "is_configured": false, 00:33:23.964 "data_offset": 2048, 00:33:23.964 "data_size": 63488 00:33:23.964 }, 00:33:23.964 { 00:33:23.964 "name": "BaseBdev3", 00:33:23.964 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:23.964 "is_configured": true, 00:33:23.964 "data_offset": 2048, 00:33:23.964 "data_size": 63488 00:33:23.964 }, 00:33:23.964 { 00:33:23.964 "name": "BaseBdev4", 00:33:23.964 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:23.964 "is_configured": true, 00:33:23.964 "data_offset": 2048, 00:33:23.964 "data_size": 63488 00:33:23.964 } 00:33:23.964 ] 00:33:23.964 }' 00:33:23.964 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:23.964 08:59:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:24.542 08:59:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:24.851 [2024-07-12 08:59:59.939277] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:24.851 [2024-07-12 08:59:59.939569] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:24.851 00:33:24.851 Latency(us) 00:33:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.851 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:24.851 raid_bdev1 : 12.16 90.44 271.33 0.00 0.00 14973.51 350.02 118203.11 00:33:24.851 
=================================================================================================================== 00:33:24.851 Total : 90.44 271.33 0.00 0.00 14973.51 350.02 118203.11 00:33:25.117 [2024-07-12 09:00:00.056816] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.117 [2024-07-12 09:00:00.057061] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.117 0 00:33:25.117 [2024-07-12 09:00:00.057224] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.117 [2024-07-12 09:00:00.057243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:33:25.117 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.117 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:25.375 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:25.376 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:25.376 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:25.376 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:25.376 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.376 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:33:25.635 /dev/nbd0 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- 
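
Read per column, the flattened bdevperf summary above says: runtime(s) 12.16, IOPS 90.44, MiB/s 271.33, Fail/s 0.00, TO/s 0.00, then average/min/max latency in microseconds (14973.51 / 350.02 / 118203.11, per the Latency(us) header). The bandwidth column is consistent with the 3145728-byte (3 MiB) I/O size, modulo rounding of the printed IOPS:

    awk 'BEGIN { print 90.44 * 3145728 / (1024 * 1024) }'   # 271.32 MiB/s

The job was launched to run I/O for 60 seconds but finished after about 12, presumably cut short when bdev_raid_delete removed raid_bdev1 out from under it.
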
# (( i <= 20 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.635 1+0 records in 00:33:25.635 1+0 records out 00:33:25.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542964 s, 7.5 MB/s 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.635 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:25.894 /dev/nbd1 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.894 09:00:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.894 1+0 records in 00:33:25.894 1+0 records out 00:33:25.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460693 s, 8.9 MB/s 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.894 09:00:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:25.894 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' 
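
The cmp offset above is not arbitrary: -i 1048576 skips the first 1048576 bytes of both nbd devices, which is exactly the 2048-block superblock region, so only the mirrored data is compared. /dev/nbd0 exports the rebuilt spare and /dev/nbd1 each surviving member in turn (BaseBdev3 here, BaseBdev4 next); a clean cmp means the rebuild reproduced the data byte for byte:

    echo $(( 2048 * 512 ))   # 1048576 bytes = data_offset, skipped on both sides
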
-z BaseBdev4 ']' 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:26.462 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:26.721 /dev/nbd1 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:26.721 1+0 records in 00:33:26.721 1+0 records out 00:33:26.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342784 s, 11.9 MB/s 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:26.721 09:00:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.721 09:00:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.978 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 
00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:27.235 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:27.799 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:27.799 [2024-07-12 09:00:02.897965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:27.799 [2024-07-12 09:00:02.898338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.799 [2024-07-12 09:00:02.898448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:27.799 [2024-07-12 09:00:02.898680] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.800 [2024-07-12 09:00:02.901230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.800 [2024-07-12 09:00:02.901427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:27.800 [2024-07-12 09:00:02.901664] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:27.800 [2024-07-12 09:00:02.901829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:27.800 [2024-07-12 09:00:02.902160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:27.800 [2024-07-12 09:00:02.902490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:27.800 spare 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.800 09:00:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.058 [2024-07-12 09:00:03.002736] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:33:28.058 [2024-07-12 09:00:03.003052] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:28.058 [2024-07-12 09:00:03.003286] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a3c0 00:33:28.058 [2024-07-12 09:00:03.003872] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:33:28.058 [2024-07-12 09:00:03.004030] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:33:28.058 [2024-07-12 09:00:03.004319] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:28.058 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:28.058 "name": "raid_bdev1", 00:33:28.058 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:28.058 "strip_size_kb": 0, 00:33:28.058 "state": "online", 00:33:28.058 "raid_level": "raid1", 00:33:28.058 "superblock": true, 00:33:28.058 "num_base_bdevs": 4, 00:33:28.058 "num_base_bdevs_discovered": 3, 00:33:28.058 "num_base_bdevs_operational": 3, 00:33:28.058 "base_bdevs_list": [ 00:33:28.058 { 00:33:28.058 "name": "spare", 00:33:28.058 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:28.058 "is_configured": true, 00:33:28.058 "data_offset": 2048, 00:33:28.058 "data_size": 63488 00:33:28.058 }, 00:33:28.058 { 00:33:28.058 "name": null, 00:33:28.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.058 "is_configured": false, 00:33:28.058 "data_offset": 2048, 00:33:28.058 "data_size": 63488 00:33:28.058 }, 00:33:28.058 { 00:33:28.058 "name": "BaseBdev3", 00:33:28.058 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:28.058 "is_configured": true, 00:33:28.058 "data_offset": 2048, 00:33:28.058 "data_size": 63488 00:33:28.058 }, 00:33:28.058 { 00:33:28.058 "name": "BaseBdev4", 00:33:28.058 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:28.058 "is_configured": true, 00:33:28.058 "data_offset": 2048, 00:33:28.058 "data_size": 63488 00:33:28.058 } 00:33:28.058 ] 00:33:28.058 }' 00:33:28.058 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:28.058 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.994 09:00:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:29.253 "name": "raid_bdev1", 00:33:29.253 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:29.253 "strip_size_kb": 0, 00:33:29.253 "state": "online", 00:33:29.253 "raid_level": "raid1", 00:33:29.253 "superblock": true, 00:33:29.253 "num_base_bdevs": 4, 00:33:29.253 "num_base_bdevs_discovered": 3, 00:33:29.253 
"num_base_bdevs_operational": 3, 00:33:29.253 "base_bdevs_list": [ 00:33:29.253 { 00:33:29.253 "name": "spare", 00:33:29.253 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:29.253 "is_configured": true, 00:33:29.253 "data_offset": 2048, 00:33:29.253 "data_size": 63488 00:33:29.253 }, 00:33:29.253 { 00:33:29.253 "name": null, 00:33:29.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.253 "is_configured": false, 00:33:29.253 "data_offset": 2048, 00:33:29.253 "data_size": 63488 00:33:29.253 }, 00:33:29.253 { 00:33:29.253 "name": "BaseBdev3", 00:33:29.253 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:29.253 "is_configured": true, 00:33:29.253 "data_offset": 2048, 00:33:29.253 "data_size": 63488 00:33:29.253 }, 00:33:29.253 { 00:33:29.253 "name": "BaseBdev4", 00:33:29.253 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:29.253 "is_configured": true, 00:33:29.253 "data_offset": 2048, 00:33:29.253 "data_size": 63488 00:33:29.253 } 00:33:29.253 ] 00:33:29.253 }' 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:29.253 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.511 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:29.511 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:29.769 [2024-07-12 09:00:04.783100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.769 09:00:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.027 09:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.027 "name": "raid_bdev1", 00:33:30.027 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:30.027 "strip_size_kb": 0, 00:33:30.027 "state": "online", 00:33:30.027 "raid_level": "raid1", 00:33:30.027 "superblock": true, 00:33:30.027 "num_base_bdevs": 4, 00:33:30.027 "num_base_bdevs_discovered": 2, 00:33:30.027 "num_base_bdevs_operational": 2, 00:33:30.027 "base_bdevs_list": [ 00:33:30.027 { 00:33:30.027 "name": null, 00:33:30.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.027 "is_configured": false, 00:33:30.027 "data_offset": 2048, 00:33:30.027 "data_size": 63488 00:33:30.027 }, 00:33:30.027 { 00:33:30.027 "name": null, 00:33:30.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.027 "is_configured": false, 00:33:30.027 "data_offset": 2048, 00:33:30.027 "data_size": 63488 00:33:30.027 }, 00:33:30.027 { 00:33:30.027 "name": "BaseBdev3", 00:33:30.027 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:30.027 "is_configured": true, 00:33:30.027 "data_offset": 2048, 00:33:30.027 "data_size": 63488 00:33:30.027 }, 00:33:30.027 { 00:33:30.027 "name": "BaseBdev4", 00:33:30.027 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:30.027 "is_configured": true, 00:33:30.027 "data_offset": 2048, 00:33:30.027 "data_size": 63488 00:33:30.027 } 00:33:30.027 ] 00:33:30.027 }' 00:33:30.027 09:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.027 09:00:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:30.591 09:00:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:30.849 [2024-07-12 09:00:06.019522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.849 [2024-07-12 09:00:06.020043] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:30.849 [2024-07-12 09:00:06.020168] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
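[annotation, not part of the captured log] The trace above drives SPDK's JSON-RPC interface through scripts/rpc.py to re-add the delayed passthru bdev "spare" to raid_bdev1 and then polls the raid bdev until a rebuild process is reported. The following is a minimal sketch of that query step only, using the RPC socket, command, and jq filters that appear verbatim in this trace (paths and bdev names are taken from the log; this is not an official helper and assumes a running SPDK target on the same socket):

    #!/usr/bin/env bash
    # Sketch: poll raid bdev state the way the verify_raid_bdev_process helper does in this trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump all raid bdevs and keep only raid_bdev1 (same jq filter as bdev_raid.sh@126/@187).
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # Report the background process type/target, defaulting to "none" when no rebuild is running.
    echo "process type:   $(jq -r '.process.type // "none"'   <<< "$info")"
    echo "process target: $(jq -r '.process.target // "none"' <<< "$info")"
    echo "discovered base bdevs: $(jq -r '.num_base_bdevs_discovered' <<< "$info")"
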
00:33:30.849 [2024-07-12 09:00:06.020281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.849 [2024-07-12 09:00:06.031363] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a560 00:33:30.849 [2024-07-12 09:00:06.033557] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:30.849 09:00:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:32.224 "name": "raid_bdev1", 00:33:32.224 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:32.224 "strip_size_kb": 0, 00:33:32.224 "state": "online", 00:33:32.224 "raid_level": "raid1", 00:33:32.224 "superblock": true, 00:33:32.224 "num_base_bdevs": 4, 00:33:32.224 "num_base_bdevs_discovered": 3, 00:33:32.224 "num_base_bdevs_operational": 3, 00:33:32.224 "process": { 00:33:32.224 "type": "rebuild", 00:33:32.224 "target": "spare", 00:33:32.224 "progress": { 00:33:32.224 "blocks": 24576, 00:33:32.224 "percent": 38 00:33:32.224 } 00:33:32.224 }, 00:33:32.224 "base_bdevs_list": [ 00:33:32.224 { 00:33:32.224 "name": "spare", 00:33:32.224 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:32.224 "is_configured": true, 00:33:32.224 "data_offset": 2048, 00:33:32.224 "data_size": 63488 00:33:32.224 }, 00:33:32.224 { 00:33:32.224 "name": null, 00:33:32.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.224 "is_configured": false, 00:33:32.224 "data_offset": 2048, 00:33:32.224 "data_size": 63488 00:33:32.224 }, 00:33:32.224 { 00:33:32.224 "name": "BaseBdev3", 00:33:32.224 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:32.224 "is_configured": true, 00:33:32.224 "data_offset": 2048, 00:33:32.224 "data_size": 63488 00:33:32.224 }, 00:33:32.224 { 00:33:32.224 "name": "BaseBdev4", 00:33:32.224 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:32.224 "is_configured": true, 00:33:32.224 "data_offset": 2048, 00:33:32.224 "data_size": 63488 00:33:32.224 } 00:33:32.224 ] 00:33:32.224 }' 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:32.224 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:32.482 [2024-07-12 09:00:07.664006] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:32.740 [2024-07-12 09:00:07.745139] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:32.740 [2024-07-12 09:00:07.745563] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.740 [2024-07-12 09:00:07.745723] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:32.740 [2024-07-12 09:00:07.745766] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.740 09:00:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.998 09:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.998 "name": "raid_bdev1", 00:33:32.998 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:32.998 "strip_size_kb": 0, 00:33:32.998 "state": "online", 00:33:32.998 "raid_level": "raid1", 00:33:32.998 "superblock": true, 00:33:32.998 "num_base_bdevs": 4, 00:33:32.998 "num_base_bdevs_discovered": 2, 00:33:32.998 "num_base_bdevs_operational": 2, 00:33:32.998 "base_bdevs_list": [ 00:33:32.998 { 00:33:32.998 "name": null, 00:33:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.998 "is_configured": false, 00:33:32.998 "data_offset": 2048, 00:33:32.998 "data_size": 63488 00:33:32.998 }, 00:33:32.998 { 00:33:32.998 "name": null, 00:33:32.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.998 "is_configured": false, 00:33:32.998 "data_offset": 2048, 00:33:32.998 "data_size": 63488 00:33:32.998 }, 00:33:32.998 { 00:33:32.998 "name": "BaseBdev3", 00:33:32.998 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:32.998 "is_configured": true, 00:33:32.998 "data_offset": 2048, 00:33:32.998 "data_size": 63488 00:33:32.998 }, 00:33:32.998 { 00:33:32.998 "name": "BaseBdev4", 00:33:32.998 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:32.998 "is_configured": true, 00:33:32.998 "data_offset": 2048, 00:33:32.998 "data_size": 63488 
00:33:32.998 } 00:33:32.998 ] 00:33:32.998 }' 00:33:32.998 09:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.998 09:00:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:33.931 09:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:33.931 [2024-07-12 09:00:08.975300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:33.931 [2024-07-12 09:00:08.975685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.931 [2024-07-12 09:00:08.975847] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:33:33.931 [2024-07-12 09:00:08.975961] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.931 [2024-07-12 09:00:08.976643] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.931 [2024-07-12 09:00:08.976830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:33.931 [2024-07-12 09:00:08.977086] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:33.931 [2024-07-12 09:00:08.977193] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:33.931 [2024-07-12 09:00:08.977286] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:33.931 [2024-07-12 09:00:08.977372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:33.931 [2024-07-12 09:00:08.988795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a8a0 00:33:33.931 spare 00:33:33.931 [2024-07-12 09:00:08.991068] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:33.931 09:00:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:34.865 09:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.865 09:00:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:34.865 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:34.865 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:34.865 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:34.865 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.865 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.123 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:35.123 "name": "raid_bdev1", 00:33:35.123 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:35.123 "strip_size_kb": 0, 00:33:35.123 "state": "online", 00:33:35.123 "raid_level": "raid1", 00:33:35.123 "superblock": true, 00:33:35.123 "num_base_bdevs": 4, 00:33:35.123 "num_base_bdevs_discovered": 3, 00:33:35.123 "num_base_bdevs_operational": 3, 00:33:35.123 "process": { 00:33:35.123 "type": "rebuild", 00:33:35.123 "target": 
"spare", 00:33:35.123 "progress": { 00:33:35.123 "blocks": 24576, 00:33:35.123 "percent": 38 00:33:35.123 } 00:33:35.123 }, 00:33:35.123 "base_bdevs_list": [ 00:33:35.123 { 00:33:35.123 "name": "spare", 00:33:35.123 "uuid": "3f3b2476-133c-56c3-ad0d-8d5e2a36ff23", 00:33:35.123 "is_configured": true, 00:33:35.123 "data_offset": 2048, 00:33:35.123 "data_size": 63488 00:33:35.123 }, 00:33:35.123 { 00:33:35.123 "name": null, 00:33:35.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.123 "is_configured": false, 00:33:35.123 "data_offset": 2048, 00:33:35.123 "data_size": 63488 00:33:35.123 }, 00:33:35.123 { 00:33:35.123 "name": "BaseBdev3", 00:33:35.123 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:35.123 "is_configured": true, 00:33:35.123 "data_offset": 2048, 00:33:35.123 "data_size": 63488 00:33:35.123 }, 00:33:35.123 { 00:33:35.123 "name": "BaseBdev4", 00:33:35.123 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:35.123 "is_configured": true, 00:33:35.123 "data_offset": 2048, 00:33:35.123 "data_size": 63488 00:33:35.123 } 00:33:35.123 ] 00:33:35.123 }' 00:33:35.123 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:35.381 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:35.381 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:35.381 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:35.381 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:35.639 [2024-07-12 09:00:10.617584] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.639 [2024-07-12 09:00:10.702724] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:35.639 [2024-07-12 09:00:10.703114] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:35.639 [2024-07-12 09:00:10.703174] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.639 [2024-07-12 09:00:10.703309] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:35.639 09:00:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.639 09:00:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.897 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:35.897 "name": "raid_bdev1", 00:33:35.897 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:35.897 "strip_size_kb": 0, 00:33:35.897 "state": "online", 00:33:35.897 "raid_level": "raid1", 00:33:35.897 "superblock": true, 00:33:35.897 "num_base_bdevs": 4, 00:33:35.897 "num_base_bdevs_discovered": 2, 00:33:35.897 "num_base_bdevs_operational": 2, 00:33:35.897 "base_bdevs_list": [ 00:33:35.897 { 00:33:35.897 "name": null, 00:33:35.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.897 "is_configured": false, 00:33:35.897 "data_offset": 2048, 00:33:35.897 "data_size": 63488 00:33:35.897 }, 00:33:35.897 { 00:33:35.897 "name": null, 00:33:35.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.897 "is_configured": false, 00:33:35.897 "data_offset": 2048, 00:33:35.897 "data_size": 63488 00:33:35.897 }, 00:33:35.897 { 00:33:35.897 "name": "BaseBdev3", 00:33:35.897 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:35.897 "is_configured": true, 00:33:35.897 "data_offset": 2048, 00:33:35.897 "data_size": 63488 00:33:35.897 }, 00:33:35.897 { 00:33:35.897 "name": "BaseBdev4", 00:33:35.897 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:35.897 "is_configured": true, 00:33:35.897 "data_offset": 2048, 00:33:35.897 "data_size": 63488 00:33:35.897 } 00:33:35.897 ] 00:33:35.897 }' 00:33:35.897 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:35.897 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:36.833 "name": "raid_bdev1", 00:33:36.833 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:36.833 "strip_size_kb": 0, 00:33:36.833 "state": "online", 00:33:36.833 "raid_level": "raid1", 00:33:36.833 "superblock": true, 00:33:36.833 "num_base_bdevs": 4, 00:33:36.833 "num_base_bdevs_discovered": 2, 00:33:36.833 "num_base_bdevs_operational": 2, 00:33:36.833 "base_bdevs_list": [ 00:33:36.833 { 00:33:36.833 "name": null, 00:33:36.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.833 "is_configured": false, 00:33:36.833 "data_offset": 2048, 00:33:36.833 "data_size": 63488 00:33:36.833 }, 00:33:36.833 { 00:33:36.833 "name": null, 
00:33:36.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.833 "is_configured": false, 00:33:36.833 "data_offset": 2048, 00:33:36.833 "data_size": 63488 00:33:36.833 }, 00:33:36.833 { 00:33:36.833 "name": "BaseBdev3", 00:33:36.833 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:36.833 "is_configured": true, 00:33:36.833 "data_offset": 2048, 00:33:36.833 "data_size": 63488 00:33:36.833 }, 00:33:36.833 { 00:33:36.833 "name": "BaseBdev4", 00:33:36.833 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:36.833 "is_configured": true, 00:33:36.833 "data_offset": 2048, 00:33:36.833 "data_size": 63488 00:33:36.833 } 00:33:36.833 ] 00:33:36.833 }' 00:33:36.833 09:00:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:37.091 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:37.091 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:37.091 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:37.091 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:37.349 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:37.608 [2024-07-12 09:00:12.578701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:37.608 [2024-07-12 09:00:12.579076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.608 [2024-07-12 09:00:12.579267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:33:37.608 [2024-07-12 09:00:12.579387] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.608 [2024-07-12 09:00:12.579960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.608 [2024-07-12 09:00:12.580206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:37.608 [2024-07-12 09:00:12.580456] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:37.608 [2024-07-12 09:00:12.580572] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:37.608 [2024-07-12 09:00:12.580679] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:37.608 BaseBdev1 00:33:37.608 09:00:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:38.542 
09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.542 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.799 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.799 "name": "raid_bdev1", 00:33:38.799 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:38.799 "strip_size_kb": 0, 00:33:38.799 "state": "online", 00:33:38.799 "raid_level": "raid1", 00:33:38.799 "superblock": true, 00:33:38.799 "num_base_bdevs": 4, 00:33:38.799 "num_base_bdevs_discovered": 2, 00:33:38.799 "num_base_bdevs_operational": 2, 00:33:38.799 "base_bdevs_list": [ 00:33:38.799 { 00:33:38.799 "name": null, 00:33:38.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.799 "is_configured": false, 00:33:38.799 "data_offset": 2048, 00:33:38.799 "data_size": 63488 00:33:38.799 }, 00:33:38.799 { 00:33:38.799 "name": null, 00:33:38.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.799 "is_configured": false, 00:33:38.799 "data_offset": 2048, 00:33:38.799 "data_size": 63488 00:33:38.799 }, 00:33:38.799 { 00:33:38.799 "name": "BaseBdev3", 00:33:38.799 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:38.799 "is_configured": true, 00:33:38.799 "data_offset": 2048, 00:33:38.799 "data_size": 63488 00:33:38.799 }, 00:33:38.799 { 00:33:38.799 "name": "BaseBdev4", 00:33:38.799 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:38.799 "is_configured": true, 00:33:38.799 "data_offset": 2048, 00:33:38.799 "data_size": 63488 00:33:38.799 } 00:33:38.799 ] 00:33:38.799 }' 00:33:38.799 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.800 09:00:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.732 "name": "raid_bdev1", 00:33:39.732 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:39.732 "strip_size_kb": 0, 00:33:39.732 "state": "online", 00:33:39.732 "raid_level": "raid1", 00:33:39.732 
"superblock": true, 00:33:39.732 "num_base_bdevs": 4, 00:33:39.732 "num_base_bdevs_discovered": 2, 00:33:39.732 "num_base_bdevs_operational": 2, 00:33:39.732 "base_bdevs_list": [ 00:33:39.732 { 00:33:39.732 "name": null, 00:33:39.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.732 "is_configured": false, 00:33:39.732 "data_offset": 2048, 00:33:39.732 "data_size": 63488 00:33:39.732 }, 00:33:39.732 { 00:33:39.732 "name": null, 00:33:39.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.732 "is_configured": false, 00:33:39.732 "data_offset": 2048, 00:33:39.732 "data_size": 63488 00:33:39.732 }, 00:33:39.732 { 00:33:39.732 "name": "BaseBdev3", 00:33:39.732 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:39.732 "is_configured": true, 00:33:39.732 "data_offset": 2048, 00:33:39.732 "data_size": 63488 00:33:39.732 }, 00:33:39.732 { 00:33:39.732 "name": "BaseBdev4", 00:33:39.732 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:39.732 "is_configured": true, 00:33:39.732 "data_offset": 2048, 00:33:39.732 "data_size": 63488 00:33:39.732 } 00:33:39.732 ] 00:33:39.732 }' 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:39.732 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:39.991 09:00:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.991 [2024-07-12 09:00:15.179640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:39.991 
[2024-07-12 09:00:15.180087] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:39.991 [2024-07-12 09:00:15.180212] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:39.991 request: 00:33:39.991 { 00:33:39.991 "base_bdev": "BaseBdev1", 00:33:39.991 "raid_bdev": "raid_bdev1", 00:33:39.991 "method": "bdev_raid_add_base_bdev", 00:33:39.991 "req_id": 1 00:33:39.991 } 00:33:39.991 Got JSON-RPC error response 00:33:39.991 response: 00:33:39.991 { 00:33:39.991 "code": -22, 00:33:39.991 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:39.991 } 00:33:40.249 09:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:33:40.249 09:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:40.249 09:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:40.249 09:00:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:40.249 09:00:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.184 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.441 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:41.441 "name": "raid_bdev1", 00:33:41.441 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:41.441 "strip_size_kb": 0, 00:33:41.442 "state": "online", 00:33:41.442 "raid_level": "raid1", 00:33:41.442 "superblock": true, 00:33:41.442 "num_base_bdevs": 4, 00:33:41.442 "num_base_bdevs_discovered": 2, 00:33:41.442 "num_base_bdevs_operational": 2, 00:33:41.442 "base_bdevs_list": [ 00:33:41.442 { 00:33:41.442 "name": null, 00:33:41.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.442 "is_configured": false, 00:33:41.442 "data_offset": 2048, 00:33:41.442 "data_size": 63488 00:33:41.442 }, 00:33:41.442 { 00:33:41.442 "name": null, 00:33:41.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.442 "is_configured": false, 00:33:41.442 
"data_offset": 2048, 00:33:41.442 "data_size": 63488 00:33:41.442 }, 00:33:41.442 { 00:33:41.442 "name": "BaseBdev3", 00:33:41.442 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:41.442 "is_configured": true, 00:33:41.442 "data_offset": 2048, 00:33:41.442 "data_size": 63488 00:33:41.442 }, 00:33:41.442 { 00:33:41.442 "name": "BaseBdev4", 00:33:41.442 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:41.442 "is_configured": true, 00:33:41.442 "data_offset": 2048, 00:33:41.442 "data_size": 63488 00:33:41.442 } 00:33:41.442 ] 00:33:41.442 }' 00:33:41.442 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:41.442 09:00:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:42.378 "name": "raid_bdev1", 00:33:42.378 "uuid": "f9c905b6-5201-40ea-82c4-b390c16c44b2", 00:33:42.378 "strip_size_kb": 0, 00:33:42.378 "state": "online", 00:33:42.378 "raid_level": "raid1", 00:33:42.378 "superblock": true, 00:33:42.378 "num_base_bdevs": 4, 00:33:42.378 "num_base_bdevs_discovered": 2, 00:33:42.378 "num_base_bdevs_operational": 2, 00:33:42.378 "base_bdevs_list": [ 00:33:42.378 { 00:33:42.378 "name": null, 00:33:42.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.378 "is_configured": false, 00:33:42.378 "data_offset": 2048, 00:33:42.378 "data_size": 63488 00:33:42.378 }, 00:33:42.378 { 00:33:42.378 "name": null, 00:33:42.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.378 "is_configured": false, 00:33:42.378 "data_offset": 2048, 00:33:42.378 "data_size": 63488 00:33:42.378 }, 00:33:42.378 { 00:33:42.378 "name": "BaseBdev3", 00:33:42.378 "uuid": "df31d2f2-53ab-5419-9b6b-83b59c89b3f2", 00:33:42.378 "is_configured": true, 00:33:42.378 "data_offset": 2048, 00:33:42.378 "data_size": 63488 00:33:42.378 }, 00:33:42.378 { 00:33:42.378 "name": "BaseBdev4", 00:33:42.378 "uuid": "bba601f8-1b0b-5416-acef-fcb5716a2f0a", 00:33:42.378 "is_configured": true, 00:33:42.378 "data_offset": 2048, 00:33:42.378 "data_size": 63488 00:33:42.378 } 00:33:42.378 ] 00:33:42.378 }' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:42.378 09:00:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 151558 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 151558 ']' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 151558 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151558 00:33:42.378 killing process with pid 151558 00:33:42.378 Received shutdown signal, test time was about 29.687480 seconds 00:33:42.378 00:33:42.378 Latency(us) 00:33:42.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.378 =================================================================================================================== 00:33:42.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151558' 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 151558 00:33:42.378 09:00:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 151558 00:33:42.378 [2024-07-12 09:00:17.564536] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:42.378 [2024-07-12 09:00:17.564733] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:42.378 [2024-07-12 09:00:17.564853] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:42.378 [2024-07-12 09:00:17.564922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:33:42.945 [2024-07-12 09:00:17.891437] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:43.876 ************************************ 00:33:43.876 END TEST raid_rebuild_test_sb_io 00:33:43.876 ************************************ 00:33:43.876 09:00:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:33:43.876 00:33:43.876 real 0m36.632s 00:33:43.876 user 0m59.828s 00:33:43.877 sys 0m3.708s 00:33:43.877 09:00:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:43.877 09:00:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:43.877 09:00:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:43.877 09:00:19 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:33:43.877 09:00:19 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:33:43.877 09:00:19 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:33:43.877 09:00:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:43.877 09:00:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.877 09:00:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:43.877 ************************************ 00:33:43.877 START TEST raid5f_state_function_test 
00:33:43.877 ************************************ 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:43.877 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=152557 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 152557' 00:33:44.135 Process raid pid: 152557 
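[annotation, not part of the captured log] At this point the bdev_svc app for raid5f_state_function_test has been started with -r /var/tmp/spdk-raid.sock (pid 152557); the trace that follows waits for that socket and then creates a raid5f bdev from three base bdevs that do not exist yet, leaving it in the "configuring" state. A minimal sketch of that create/verify round trip, using only the arguments visible in the trace (raid level raid5f, strip size 64 KiB, bdev name Existed_Raid) and assuming the target above is already listening:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Create the raid5f bdev before its base bdevs exist; it stays in the "configuring" state.
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Confirm the expected state, level, and strip size (the same fields the test asserts on).
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.raid_level) \(.strip_size_kb)"'
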
00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 152557 /var/tmp/spdk-raid.sock 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 152557 ']' 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:44.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:44.135 09:00:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.135 [2024-07-12 09:00:19.142571] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:33:44.135 [2024-07-12 09:00:19.143109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.135 [2024-07-12 09:00:19.313871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.393 [2024-07-12 09:00:19.515727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.651 [2024-07-12 09:00:19.702325] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:44.909 09:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.909 09:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:33:44.909 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:45.167 [2024-07-12 09:00:20.287295] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:45.167 [2024-07-12 09:00:20.287648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:45.167 [2024-07-12 09:00:20.287765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:45.167 [2024-07-12 09:00:20.287831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:45.167 [2024-07-12 09:00:20.287922] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:45.167 [2024-07-12 09:00:20.288068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:45.167 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:45.167 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:45.167 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:45.168 09:00:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.168 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.426 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:45.426 "name": "Existed_Raid", 00:33:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.426 "strip_size_kb": 64, 00:33:45.426 "state": "configuring", 00:33:45.426 "raid_level": "raid5f", 00:33:45.426 "superblock": false, 00:33:45.426 "num_base_bdevs": 3, 00:33:45.426 "num_base_bdevs_discovered": 0, 00:33:45.426 "num_base_bdevs_operational": 3, 00:33:45.426 "base_bdevs_list": [ 00:33:45.426 { 00:33:45.426 "name": "BaseBdev1", 00:33:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.426 "is_configured": false, 00:33:45.426 "data_offset": 0, 00:33:45.426 "data_size": 0 00:33:45.426 }, 00:33:45.426 { 00:33:45.426 "name": "BaseBdev2", 00:33:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.426 "is_configured": false, 00:33:45.426 "data_offset": 0, 00:33:45.426 "data_size": 0 00:33:45.426 }, 00:33:45.426 { 00:33:45.426 "name": "BaseBdev3", 00:33:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.426 "is_configured": false, 00:33:45.426 "data_offset": 0, 00:33:45.426 "data_size": 0 00:33:45.426 } 00:33:45.426 ] 00:33:45.426 }' 00:33:45.426 09:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:45.426 09:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.370 09:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:46.370 [2024-07-12 09:00:21.495390] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:46.370 [2024-07-12 09:00:21.495621] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:33:46.370 09:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:46.628 [2024-07-12 09:00:21.771457] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:46.628 [2024-07-12 09:00:21.771771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:46.628 [2024-07-12 09:00:21.771892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:46.628 [2024-07-12 09:00:21.771948] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:46.628 [2024-07-12 09:00:21.772040] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:46.628 [2024-07-12 09:00:21.772100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:46.628 09:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:46.886 [2024-07-12 09:00:22.063944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:46.886 BaseBdev1 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:46.886 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.144 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:47.401 [ 00:33:47.402 { 00:33:47.402 "name": "BaseBdev1", 00:33:47.402 "aliases": [ 00:33:47.402 "7e45d6f8-c510-4865-8228-650886bf5bd2" 00:33:47.402 ], 00:33:47.402 "product_name": "Malloc disk", 00:33:47.402 "block_size": 512, 00:33:47.402 "num_blocks": 65536, 00:33:47.402 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:47.402 "assigned_rate_limits": { 00:33:47.402 "rw_ios_per_sec": 0, 00:33:47.402 "rw_mbytes_per_sec": 0, 00:33:47.402 "r_mbytes_per_sec": 0, 00:33:47.402 "w_mbytes_per_sec": 0 00:33:47.402 }, 00:33:47.402 "claimed": true, 00:33:47.402 "claim_type": "exclusive_write", 00:33:47.402 "zoned": false, 00:33:47.402 "supported_io_types": { 00:33:47.402 "read": true, 00:33:47.402 "write": true, 00:33:47.402 "unmap": true, 00:33:47.402 "flush": true, 00:33:47.402 "reset": true, 00:33:47.402 "nvme_admin": false, 00:33:47.402 "nvme_io": false, 00:33:47.402 "nvme_io_md": false, 00:33:47.402 "write_zeroes": true, 00:33:47.402 "zcopy": true, 00:33:47.402 "get_zone_info": false, 00:33:47.402 "zone_management": false, 00:33:47.402 "zone_append": false, 00:33:47.402 "compare": false, 00:33:47.402 "compare_and_write": false, 00:33:47.402 "abort": true, 00:33:47.402 "seek_hole": false, 00:33:47.402 "seek_data": false, 00:33:47.402 "copy": true, 00:33:47.402 "nvme_iov_md": false 00:33:47.402 }, 00:33:47.402 "memory_domains": [ 00:33:47.402 { 00:33:47.402 "dma_device_id": "system", 00:33:47.402 "dma_device_type": 1 00:33:47.402 }, 00:33:47.402 { 00:33:47.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.402 "dma_device_type": 2 00:33:47.402 } 00:33:47.402 ], 00:33:47.402 "driver_specific": {} 00:33:47.402 } 00:33:47.402 ] 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:47.402 09:00:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.402 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.660 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.660 "name": "Existed_Raid", 00:33:47.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.660 "strip_size_kb": 64, 00:33:47.660 "state": "configuring", 00:33:47.660 "raid_level": "raid5f", 00:33:47.660 "superblock": false, 00:33:47.660 "num_base_bdevs": 3, 00:33:47.660 "num_base_bdevs_discovered": 1, 00:33:47.660 "num_base_bdevs_operational": 3, 00:33:47.660 "base_bdevs_list": [ 00:33:47.660 { 00:33:47.660 "name": "BaseBdev1", 00:33:47.660 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:47.660 "is_configured": true, 00:33:47.660 "data_offset": 0, 00:33:47.660 "data_size": 65536 00:33:47.660 }, 00:33:47.660 { 00:33:47.660 "name": "BaseBdev2", 00:33:47.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.660 "is_configured": false, 00:33:47.660 "data_offset": 0, 00:33:47.660 "data_size": 0 00:33:47.660 }, 00:33:47.660 { 00:33:47.660 "name": "BaseBdev3", 00:33:47.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.660 "is_configured": false, 00:33:47.660 "data_offset": 0, 00:33:47.660 "data_size": 0 00:33:47.660 } 00:33:47.660 ] 00:33:47.660 }' 00:33:47.660 09:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.660 09:00:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.669 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:48.669 [2024-07-12 09:00:23.728381] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:48.669 [2024-07-12 09:00:23.728734] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:33:48.669 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:33:48.928 [2024-07-12 09:00:23.936465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:48.928 [2024-07-12 09:00:23.938730] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:48.928 [2024-07-12 09:00:23.938970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:48.928 [2024-07-12 09:00:23.939102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:48.928 [2024-07-12 09:00:23.939234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.928 09:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.187 09:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.187 "name": "Existed_Raid", 00:33:49.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.187 "strip_size_kb": 64, 00:33:49.187 "state": "configuring", 00:33:49.187 "raid_level": "raid5f", 00:33:49.187 "superblock": false, 00:33:49.187 "num_base_bdevs": 3, 00:33:49.187 "num_base_bdevs_discovered": 1, 00:33:49.187 "num_base_bdevs_operational": 3, 00:33:49.187 "base_bdevs_list": [ 00:33:49.187 { 00:33:49.187 "name": "BaseBdev1", 00:33:49.187 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:49.187 "is_configured": true, 00:33:49.187 "data_offset": 0, 00:33:49.187 "data_size": 65536 00:33:49.187 }, 00:33:49.187 { 00:33:49.187 "name": "BaseBdev2", 00:33:49.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.187 "is_configured": false, 00:33:49.187 "data_offset": 0, 00:33:49.187 "data_size": 0 00:33:49.187 }, 00:33:49.187 { 00:33:49.187 "name": "BaseBdev3", 00:33:49.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.187 "is_configured": false, 00:33:49.187 "data_offset": 0, 00:33:49.187 
"data_size": 0 00:33:49.187 } 00:33:49.187 ] 00:33:49.187 }' 00:33:49.187 09:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.187 09:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.754 09:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:50.012 [2024-07-12 09:00:25.181706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:50.012 BaseBdev2 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:50.012 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:50.578 [ 00:33:50.578 { 00:33:50.578 "name": "BaseBdev2", 00:33:50.578 "aliases": [ 00:33:50.578 "c591c942-d691-4594-9fb3-4c162f1fb768" 00:33:50.578 ], 00:33:50.578 "product_name": "Malloc disk", 00:33:50.578 "block_size": 512, 00:33:50.578 "num_blocks": 65536, 00:33:50.578 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:50.578 "assigned_rate_limits": { 00:33:50.578 "rw_ios_per_sec": 0, 00:33:50.578 "rw_mbytes_per_sec": 0, 00:33:50.578 "r_mbytes_per_sec": 0, 00:33:50.578 "w_mbytes_per_sec": 0 00:33:50.578 }, 00:33:50.578 "claimed": true, 00:33:50.578 "claim_type": "exclusive_write", 00:33:50.578 "zoned": false, 00:33:50.578 "supported_io_types": { 00:33:50.578 "read": true, 00:33:50.578 "write": true, 00:33:50.578 "unmap": true, 00:33:50.578 "flush": true, 00:33:50.578 "reset": true, 00:33:50.578 "nvme_admin": false, 00:33:50.578 "nvme_io": false, 00:33:50.578 "nvme_io_md": false, 00:33:50.578 "write_zeroes": true, 00:33:50.578 "zcopy": true, 00:33:50.578 "get_zone_info": false, 00:33:50.578 "zone_management": false, 00:33:50.578 "zone_append": false, 00:33:50.578 "compare": false, 00:33:50.578 "compare_and_write": false, 00:33:50.578 "abort": true, 00:33:50.578 "seek_hole": false, 00:33:50.578 "seek_data": false, 00:33:50.578 "copy": true, 00:33:50.578 "nvme_iov_md": false 00:33:50.578 }, 00:33:50.578 "memory_domains": [ 00:33:50.578 { 00:33:50.578 "dma_device_id": "system", 00:33:50.578 "dma_device_type": 1 00:33:50.578 }, 00:33:50.578 { 00:33:50.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.578 "dma_device_type": 2 00:33:50.578 } 00:33:50.578 ], 00:33:50.578 "driver_specific": {} 00:33:50.578 } 00:33:50.578 ] 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.578 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:50.837 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:50.837 "name": "Existed_Raid", 00:33:50.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.837 "strip_size_kb": 64, 00:33:50.837 "state": "configuring", 00:33:50.837 "raid_level": "raid5f", 00:33:50.837 "superblock": false, 00:33:50.837 "num_base_bdevs": 3, 00:33:50.837 "num_base_bdevs_discovered": 2, 00:33:50.837 "num_base_bdevs_operational": 3, 00:33:50.837 "base_bdevs_list": [ 00:33:50.837 { 00:33:50.837 "name": "BaseBdev1", 00:33:50.837 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:50.837 "is_configured": true, 00:33:50.837 "data_offset": 0, 00:33:50.837 "data_size": 65536 00:33:50.837 }, 00:33:50.837 { 00:33:50.837 "name": "BaseBdev2", 00:33:50.837 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:50.837 "is_configured": true, 00:33:50.837 "data_offset": 0, 00:33:50.837 "data_size": 65536 00:33:50.837 }, 00:33:50.837 { 00:33:50.837 "name": "BaseBdev3", 00:33:50.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.837 "is_configured": false, 00:33:50.837 "data_offset": 0, 00:33:50.837 "data_size": 0 00:33:50.837 } 00:33:50.837 ] 00:33:50.837 }' 00:33:50.837 09:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:50.837 09:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.770 09:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:51.770 [2024-07-12 09:00:26.965197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:51.770 [2024-07-12 09:00:26.965512] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:33:51.770 [2024-07-12 09:00:26.965560] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:33:51.770 [2024-07-12 09:00:26.965828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:33:52.028 [2024-07-12 09:00:26.971148] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:33:52.028 [2024-07-12 09:00:26.971290] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:33:52.028 [2024-07-12 09:00:26.971693] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:52.028 BaseBdev3 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:52.028 09:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:52.028 09:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:52.285 [ 00:33:52.285 { 00:33:52.285 "name": "BaseBdev3", 00:33:52.285 "aliases": [ 00:33:52.285 "62b72137-38d7-4ee9-8c1c-953c3803764e" 00:33:52.285 ], 00:33:52.285 "product_name": "Malloc disk", 00:33:52.285 "block_size": 512, 00:33:52.285 "num_blocks": 65536, 00:33:52.285 "uuid": "62b72137-38d7-4ee9-8c1c-953c3803764e", 00:33:52.285 "assigned_rate_limits": { 00:33:52.285 "rw_ios_per_sec": 0, 00:33:52.285 "rw_mbytes_per_sec": 0, 00:33:52.285 "r_mbytes_per_sec": 0, 00:33:52.285 "w_mbytes_per_sec": 0 00:33:52.285 }, 00:33:52.285 "claimed": true, 00:33:52.285 "claim_type": "exclusive_write", 00:33:52.285 "zoned": false, 00:33:52.285 "supported_io_types": { 00:33:52.285 "read": true, 00:33:52.285 "write": true, 00:33:52.285 "unmap": true, 00:33:52.285 "flush": true, 00:33:52.285 "reset": true, 00:33:52.285 "nvme_admin": false, 00:33:52.285 "nvme_io": false, 00:33:52.285 "nvme_io_md": false, 00:33:52.285 "write_zeroes": true, 00:33:52.285 "zcopy": true, 00:33:52.285 "get_zone_info": false, 00:33:52.285 "zone_management": false, 00:33:52.285 "zone_append": false, 00:33:52.285 "compare": false, 00:33:52.285 "compare_and_write": false, 00:33:52.285 "abort": true, 00:33:52.285 "seek_hole": false, 00:33:52.285 "seek_data": false, 00:33:52.285 "copy": true, 00:33:52.285 "nvme_iov_md": false 00:33:52.285 }, 00:33:52.285 "memory_domains": [ 00:33:52.285 { 00:33:52.285 "dma_device_id": "system", 00:33:52.285 "dma_device_type": 1 00:33:52.285 }, 00:33:52.285 { 00:33:52.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.285 "dma_device_type": 2 00:33:52.285 } 00:33:52.285 ], 00:33:52.285 "driver_specific": {} 00:33:52.285 } 00:33:52.285 ] 00:33:52.285 09:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.543 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.801 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:52.801 "name": "Existed_Raid", 00:33:52.801 "uuid": "f248dd25-666b-4a66-8988-337ac8a5dad2", 00:33:52.801 "strip_size_kb": 64, 00:33:52.801 "state": "online", 00:33:52.801 "raid_level": "raid5f", 00:33:52.801 "superblock": false, 00:33:52.801 "num_base_bdevs": 3, 00:33:52.801 "num_base_bdevs_discovered": 3, 00:33:52.801 "num_base_bdevs_operational": 3, 00:33:52.801 "base_bdevs_list": [ 00:33:52.801 { 00:33:52.801 "name": "BaseBdev1", 00:33:52.801 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:52.801 "is_configured": true, 00:33:52.801 "data_offset": 0, 00:33:52.801 "data_size": 65536 00:33:52.801 }, 00:33:52.801 { 00:33:52.801 "name": "BaseBdev2", 00:33:52.801 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:52.801 "is_configured": true, 00:33:52.801 "data_offset": 0, 00:33:52.801 "data_size": 65536 00:33:52.801 }, 00:33:52.801 { 00:33:52.801 "name": "BaseBdev3", 00:33:52.801 "uuid": "62b72137-38d7-4ee9-8c1c-953c3803764e", 00:33:52.801 "is_configured": true, 00:33:52.801 "data_offset": 0, 00:33:52.801 "data_size": 65536 00:33:52.801 } 00:33:52.801 ] 00:33:52.801 }' 00:33:52.801 09:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:52.802 09:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:53.367 09:00:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:53.367 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:53.626 [2024-07-12 09:00:28.645646] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:53.626 "name": "Existed_Raid", 00:33:53.626 "aliases": [ 00:33:53.626 "f248dd25-666b-4a66-8988-337ac8a5dad2" 00:33:53.626 ], 00:33:53.626 "product_name": "Raid Volume", 00:33:53.626 "block_size": 512, 00:33:53.626 "num_blocks": 131072, 00:33:53.626 "uuid": "f248dd25-666b-4a66-8988-337ac8a5dad2", 00:33:53.626 "assigned_rate_limits": { 00:33:53.626 "rw_ios_per_sec": 0, 00:33:53.626 "rw_mbytes_per_sec": 0, 00:33:53.626 "r_mbytes_per_sec": 0, 00:33:53.626 "w_mbytes_per_sec": 0 00:33:53.626 }, 00:33:53.626 "claimed": false, 00:33:53.626 "zoned": false, 00:33:53.626 "supported_io_types": { 00:33:53.626 "read": true, 00:33:53.626 "write": true, 00:33:53.626 "unmap": false, 00:33:53.626 "flush": false, 00:33:53.626 "reset": true, 00:33:53.626 "nvme_admin": false, 00:33:53.626 "nvme_io": false, 00:33:53.626 "nvme_io_md": false, 00:33:53.626 "write_zeroes": true, 00:33:53.626 "zcopy": false, 00:33:53.626 "get_zone_info": false, 00:33:53.626 "zone_management": false, 00:33:53.626 "zone_append": false, 00:33:53.626 "compare": false, 00:33:53.626 "compare_and_write": false, 00:33:53.626 "abort": false, 00:33:53.626 "seek_hole": false, 00:33:53.626 "seek_data": false, 00:33:53.626 "copy": false, 00:33:53.626 "nvme_iov_md": false 00:33:53.626 }, 00:33:53.626 "driver_specific": { 00:33:53.626 "raid": { 00:33:53.626 "uuid": "f248dd25-666b-4a66-8988-337ac8a5dad2", 00:33:53.626 "strip_size_kb": 64, 00:33:53.626 "state": "online", 00:33:53.626 "raid_level": "raid5f", 00:33:53.626 "superblock": false, 00:33:53.626 "num_base_bdevs": 3, 00:33:53.626 "num_base_bdevs_discovered": 3, 00:33:53.626 "num_base_bdevs_operational": 3, 00:33:53.626 "base_bdevs_list": [ 00:33:53.626 { 00:33:53.626 "name": "BaseBdev1", 00:33:53.626 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:53.626 "is_configured": true, 00:33:53.626 "data_offset": 0, 00:33:53.626 "data_size": 65536 00:33:53.626 }, 00:33:53.626 { 00:33:53.626 "name": "BaseBdev2", 00:33:53.626 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:53.626 "is_configured": true, 00:33:53.626 "data_offset": 0, 00:33:53.626 "data_size": 65536 00:33:53.626 }, 00:33:53.626 { 00:33:53.626 "name": "BaseBdev3", 00:33:53.626 "uuid": "62b72137-38d7-4ee9-8c1c-953c3803764e", 00:33:53.626 "is_configured": true, 00:33:53.626 "data_offset": 0, 00:33:53.626 "data_size": 65536 00:33:53.626 } 00:33:53.626 ] 00:33:53.626 } 00:33:53.626 } 00:33:53.626 }' 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:53.626 BaseBdev2 00:33:53.626 BaseBdev3' 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:53.626 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:53.884 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:53.884 "name": "BaseBdev1", 00:33:53.884 "aliases": [ 00:33:53.884 "7e45d6f8-c510-4865-8228-650886bf5bd2" 00:33:53.884 ], 00:33:53.884 "product_name": "Malloc disk", 00:33:53.884 "block_size": 512, 00:33:53.884 "num_blocks": 65536, 00:33:53.884 "uuid": "7e45d6f8-c510-4865-8228-650886bf5bd2", 00:33:53.884 "assigned_rate_limits": { 00:33:53.884 "rw_ios_per_sec": 0, 00:33:53.884 "rw_mbytes_per_sec": 0, 00:33:53.884 "r_mbytes_per_sec": 0, 00:33:53.884 "w_mbytes_per_sec": 0 00:33:53.884 }, 00:33:53.884 "claimed": true, 00:33:53.884 "claim_type": "exclusive_write", 00:33:53.884 "zoned": false, 00:33:53.884 "supported_io_types": { 00:33:53.884 "read": true, 00:33:53.884 "write": true, 00:33:53.884 "unmap": true, 00:33:53.884 "flush": true, 00:33:53.884 "reset": true, 00:33:53.884 "nvme_admin": false, 00:33:53.884 "nvme_io": false, 00:33:53.884 "nvme_io_md": false, 00:33:53.884 "write_zeroes": true, 00:33:53.884 "zcopy": true, 00:33:53.884 "get_zone_info": false, 00:33:53.884 "zone_management": false, 00:33:53.884 "zone_append": false, 00:33:53.884 "compare": false, 00:33:53.884 "compare_and_write": false, 00:33:53.884 "abort": true, 00:33:53.884 "seek_hole": false, 00:33:53.884 "seek_data": false, 00:33:53.884 "copy": true, 00:33:53.884 "nvme_iov_md": false 00:33:53.884 }, 00:33:53.884 "memory_domains": [ 00:33:53.884 { 00:33:53.884 "dma_device_id": "system", 00:33:53.884 "dma_device_type": 1 00:33:53.884 }, 00:33:53.884 { 00:33:53.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.884 "dma_device_type": 2 00:33:53.884 } 00:33:53.884 ], 00:33:53.884 "driver_specific": {} 00:33:53.884 }' 00:33:53.884 09:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:53.884 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:54.142 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:54.399 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:54.399 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:54.399 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:54.399 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:54.399 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:54.657 "name": "BaseBdev2", 00:33:54.657 "aliases": [ 00:33:54.657 "c591c942-d691-4594-9fb3-4c162f1fb768" 00:33:54.657 ], 00:33:54.657 "product_name": "Malloc disk", 00:33:54.657 "block_size": 512, 00:33:54.657 "num_blocks": 65536, 00:33:54.657 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:54.657 "assigned_rate_limits": { 00:33:54.657 "rw_ios_per_sec": 0, 00:33:54.657 "rw_mbytes_per_sec": 0, 00:33:54.657 "r_mbytes_per_sec": 0, 00:33:54.657 "w_mbytes_per_sec": 0 00:33:54.657 }, 00:33:54.657 "claimed": true, 00:33:54.657 "claim_type": "exclusive_write", 00:33:54.657 "zoned": false, 00:33:54.657 "supported_io_types": { 00:33:54.657 "read": true, 00:33:54.657 "write": true, 00:33:54.657 "unmap": true, 00:33:54.657 "flush": true, 00:33:54.657 "reset": true, 00:33:54.657 "nvme_admin": false, 00:33:54.657 "nvme_io": false, 00:33:54.657 "nvme_io_md": false, 00:33:54.657 "write_zeroes": true, 00:33:54.657 "zcopy": true, 00:33:54.657 "get_zone_info": false, 00:33:54.657 "zone_management": false, 00:33:54.657 "zone_append": false, 00:33:54.657 "compare": false, 00:33:54.657 "compare_and_write": false, 00:33:54.657 "abort": true, 00:33:54.657 "seek_hole": false, 00:33:54.657 "seek_data": false, 00:33:54.657 "copy": true, 00:33:54.657 "nvme_iov_md": false 00:33:54.657 }, 00:33:54.657 "memory_domains": [ 00:33:54.657 { 00:33:54.657 "dma_device_id": "system", 00:33:54.657 "dma_device_type": 1 00:33:54.657 }, 00:33:54.657 { 00:33:54.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.657 "dma_device_type": 2 00:33:54.657 } 00:33:54.657 ], 00:33:54.657 "driver_specific": {} 00:33:54.657 }' 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:54.657 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:54.916 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:54.916 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:54.916 09:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:54.916 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:54.916 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:54.916 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:55.175 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:55.175 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:55.175 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:55.175 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:55.433 
09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:55.433 "name": "BaseBdev3", 00:33:55.433 "aliases": [ 00:33:55.433 "62b72137-38d7-4ee9-8c1c-953c3803764e" 00:33:55.433 ], 00:33:55.433 "product_name": "Malloc disk", 00:33:55.433 "block_size": 512, 00:33:55.433 "num_blocks": 65536, 00:33:55.433 "uuid": "62b72137-38d7-4ee9-8c1c-953c3803764e", 00:33:55.433 "assigned_rate_limits": { 00:33:55.433 "rw_ios_per_sec": 0, 00:33:55.433 "rw_mbytes_per_sec": 0, 00:33:55.433 "r_mbytes_per_sec": 0, 00:33:55.433 "w_mbytes_per_sec": 0 00:33:55.433 }, 00:33:55.433 "claimed": true, 00:33:55.433 "claim_type": "exclusive_write", 00:33:55.433 "zoned": false, 00:33:55.433 "supported_io_types": { 00:33:55.433 "read": true, 00:33:55.433 "write": true, 00:33:55.433 "unmap": true, 00:33:55.433 "flush": true, 00:33:55.433 "reset": true, 00:33:55.433 "nvme_admin": false, 00:33:55.433 "nvme_io": false, 00:33:55.433 "nvme_io_md": false, 00:33:55.433 "write_zeroes": true, 00:33:55.433 "zcopy": true, 00:33:55.433 "get_zone_info": false, 00:33:55.433 "zone_management": false, 00:33:55.433 "zone_append": false, 00:33:55.433 "compare": false, 00:33:55.433 "compare_and_write": false, 00:33:55.433 "abort": true, 00:33:55.433 "seek_hole": false, 00:33:55.433 "seek_data": false, 00:33:55.433 "copy": true, 00:33:55.433 "nvme_iov_md": false 00:33:55.433 }, 00:33:55.433 "memory_domains": [ 00:33:55.433 { 00:33:55.433 "dma_device_id": "system", 00:33:55.433 "dma_device_type": 1 00:33:55.433 }, 00:33:55.433 { 00:33:55.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:55.433 "dma_device_type": 2 00:33:55.433 } 00:33:55.433 ], 00:33:55.433 "driver_specific": {} 00:33:55.433 }' 00:33:55.433 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:55.433 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:55.433 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:55.433 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:55.433 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:55.692 09:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:55.950 [2024-07-12 09:00:31.110020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.207 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.466 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.466 "name": "Existed_Raid", 00:33:56.466 "uuid": "f248dd25-666b-4a66-8988-337ac8a5dad2", 00:33:56.466 "strip_size_kb": 64, 00:33:56.466 "state": "online", 00:33:56.466 "raid_level": "raid5f", 00:33:56.466 "superblock": false, 00:33:56.466 "num_base_bdevs": 3, 00:33:56.466 "num_base_bdevs_discovered": 2, 00:33:56.466 "num_base_bdevs_operational": 2, 00:33:56.466 "base_bdevs_list": [ 00:33:56.466 { 00:33:56.466 "name": null, 00:33:56.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.466 "is_configured": false, 00:33:56.466 "data_offset": 0, 00:33:56.466 "data_size": 65536 00:33:56.466 }, 00:33:56.466 { 00:33:56.466 "name": "BaseBdev2", 00:33:56.466 "uuid": "c591c942-d691-4594-9fb3-4c162f1fb768", 00:33:56.466 "is_configured": true, 00:33:56.466 "data_offset": 0, 00:33:56.466 "data_size": 65536 00:33:56.466 }, 00:33:56.466 { 00:33:56.466 "name": "BaseBdev3", 00:33:56.466 "uuid": "62b72137-38d7-4ee9-8c1c-953c3803764e", 00:33:56.466 "is_configured": true, 00:33:56.466 "data_offset": 0, 00:33:56.466 "data_size": 65536 00:33:56.466 } 00:33:56.466 ] 00:33:56.466 }' 00:33:56.466 09:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.466 09:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.032 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:57.032 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:57.032 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.032 
09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:57.289 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:57.289 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:57.289 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:57.548 [2024-07-12 09:00:32.638573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:57.548 [2024-07-12 09:00:32.639963] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:57.548 [2024-07-12 09:00:32.719818] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:57.548 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:57.548 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:57.548 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.548 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:57.805 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:57.805 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:57.805 09:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:58.063 [2024-07-12 09:00:33.163959] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:58.063 [2024-07-12 09:00:33.164347] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:33:58.063 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:58.063 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:58.063 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.063 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:58.321 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:58.580 BaseBdev2 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev2 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:58.580 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:58.838 09:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:59.097 [ 00:33:59.097 { 00:33:59.097 "name": "BaseBdev2", 00:33:59.097 "aliases": [ 00:33:59.097 "77ee8271-7b25-423b-9e39-c5f21b01baec" 00:33:59.097 ], 00:33:59.097 "product_name": "Malloc disk", 00:33:59.097 "block_size": 512, 00:33:59.097 "num_blocks": 65536, 00:33:59.097 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:33:59.097 "assigned_rate_limits": { 00:33:59.097 "rw_ios_per_sec": 0, 00:33:59.097 "rw_mbytes_per_sec": 0, 00:33:59.097 "r_mbytes_per_sec": 0, 00:33:59.097 "w_mbytes_per_sec": 0 00:33:59.097 }, 00:33:59.097 "claimed": false, 00:33:59.097 "zoned": false, 00:33:59.097 "supported_io_types": { 00:33:59.097 "read": true, 00:33:59.097 "write": true, 00:33:59.097 "unmap": true, 00:33:59.097 "flush": true, 00:33:59.097 "reset": true, 00:33:59.097 "nvme_admin": false, 00:33:59.097 "nvme_io": false, 00:33:59.097 "nvme_io_md": false, 00:33:59.097 "write_zeroes": true, 00:33:59.097 "zcopy": true, 00:33:59.097 "get_zone_info": false, 00:33:59.097 "zone_management": false, 00:33:59.097 "zone_append": false, 00:33:59.097 "compare": false, 00:33:59.097 "compare_and_write": false, 00:33:59.097 "abort": true, 00:33:59.097 "seek_hole": false, 00:33:59.097 "seek_data": false, 00:33:59.097 "copy": true, 00:33:59.097 "nvme_iov_md": false 00:33:59.097 }, 00:33:59.097 "memory_domains": [ 00:33:59.097 { 00:33:59.097 "dma_device_id": "system", 00:33:59.097 "dma_device_type": 1 00:33:59.097 }, 00:33:59.097 { 00:33:59.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.097 "dma_device_type": 2 00:33:59.097 } 00:33:59.097 ], 00:33:59.097 "driver_specific": {} 00:33:59.097 } 00:33:59.097 ] 00:33:59.097 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:59.097 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:59.097 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:59.097 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:59.355 BaseBdev3 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:59.355 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:59.621 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:59.887 [ 00:33:59.887 { 00:33:59.887 "name": "BaseBdev3", 00:33:59.887 "aliases": [ 00:33:59.887 "10be987b-c292-4220-92a4-091b485700c8" 00:33:59.887 ], 00:33:59.887 "product_name": "Malloc disk", 00:33:59.887 "block_size": 512, 00:33:59.887 "num_blocks": 65536, 00:33:59.887 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:33:59.887 "assigned_rate_limits": { 00:33:59.887 "rw_ios_per_sec": 0, 00:33:59.887 "rw_mbytes_per_sec": 0, 00:33:59.887 "r_mbytes_per_sec": 0, 00:33:59.887 "w_mbytes_per_sec": 0 00:33:59.887 }, 00:33:59.887 "claimed": false, 00:33:59.887 "zoned": false, 00:33:59.887 "supported_io_types": { 00:33:59.887 "read": true, 00:33:59.887 "write": true, 00:33:59.887 "unmap": true, 00:33:59.887 "flush": true, 00:33:59.887 "reset": true, 00:33:59.887 "nvme_admin": false, 00:33:59.887 "nvme_io": false, 00:33:59.887 "nvme_io_md": false, 00:33:59.887 "write_zeroes": true, 00:33:59.887 "zcopy": true, 00:33:59.887 "get_zone_info": false, 00:33:59.887 "zone_management": false, 00:33:59.887 "zone_append": false, 00:33:59.887 "compare": false, 00:33:59.887 "compare_and_write": false, 00:33:59.887 "abort": true, 00:33:59.887 "seek_hole": false, 00:33:59.887 "seek_data": false, 00:33:59.887 "copy": true, 00:33:59.887 "nvme_iov_md": false 00:33:59.887 }, 00:33:59.887 "memory_domains": [ 00:33:59.887 { 00:33:59.887 "dma_device_id": "system", 00:33:59.887 "dma_device_type": 1 00:33:59.887 }, 00:33:59.887 { 00:33:59.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.887 "dma_device_type": 2 00:33:59.887 } 00:33:59.887 ], 00:33:59.887 "driver_specific": {} 00:33:59.887 } 00:33:59.887 ] 00:33:59.887 09:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:59.887 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:59.887 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:59.887 09:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:00.145 [2024-07-12 09:00:35.189031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:00.145 [2024-07-12 09:00:35.189374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:00.145 [2024-07-12 09:00:35.189536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:00.145 [2024-07-12 09:00:35.191806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:00.145 09:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.145 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:00.404 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:00.404 "name": "Existed_Raid", 00:34:00.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.404 "strip_size_kb": 64, 00:34:00.404 "state": "configuring", 00:34:00.404 "raid_level": "raid5f", 00:34:00.404 "superblock": false, 00:34:00.404 "num_base_bdevs": 3, 00:34:00.404 "num_base_bdevs_discovered": 2, 00:34:00.404 "num_base_bdevs_operational": 3, 00:34:00.404 "base_bdevs_list": [ 00:34:00.404 { 00:34:00.404 "name": "BaseBdev1", 00:34:00.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.404 "is_configured": false, 00:34:00.404 "data_offset": 0, 00:34:00.404 "data_size": 0 00:34:00.404 }, 00:34:00.404 { 00:34:00.404 "name": "BaseBdev2", 00:34:00.404 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:00.404 "is_configured": true, 00:34:00.404 "data_offset": 0, 00:34:00.404 "data_size": 65536 00:34:00.404 }, 00:34:00.404 { 00:34:00.404 "name": "BaseBdev3", 00:34:00.404 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:00.404 "is_configured": true, 00:34:00.404 "data_offset": 0, 00:34:00.404 "data_size": 65536 00:34:00.404 } 00:34:00.404 ] 00:34:00.404 }' 00:34:00.404 09:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:00.404 09:00:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.968 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:01.226 [2024-07-12 09:00:36.365413] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:01.226 
09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.226 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:01.484 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:01.484 "name": "Existed_Raid", 00:34:01.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.484 "strip_size_kb": 64, 00:34:01.484 "state": "configuring", 00:34:01.484 "raid_level": "raid5f", 00:34:01.484 "superblock": false, 00:34:01.484 "num_base_bdevs": 3, 00:34:01.484 "num_base_bdevs_discovered": 1, 00:34:01.484 "num_base_bdevs_operational": 3, 00:34:01.484 "base_bdevs_list": [ 00:34:01.484 { 00:34:01.484 "name": "BaseBdev1", 00:34:01.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.484 "is_configured": false, 00:34:01.484 "data_offset": 0, 00:34:01.484 "data_size": 0 00:34:01.484 }, 00:34:01.484 { 00:34:01.484 "name": null, 00:34:01.484 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:01.484 "is_configured": false, 00:34:01.484 "data_offset": 0, 00:34:01.484 "data_size": 65536 00:34:01.484 }, 00:34:01.484 { 00:34:01.484 "name": "BaseBdev3", 00:34:01.484 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:01.484 "is_configured": true, 00:34:01.484 "data_offset": 0, 00:34:01.484 "data_size": 65536 00:34:01.484 } 00:34:01.484 ] 00:34:01.484 }' 00:34:01.484 09:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:01.484 09:00:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.417 09:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.417 09:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:02.417 09:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:02.417 09:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:02.675 [2024-07-12 09:00:37.813820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:02.675 BaseBdev1 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:02.675 09:00:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:02.675 09:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:02.933 09:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:03.190 [ 00:34:03.190 { 00:34:03.190 "name": "BaseBdev1", 00:34:03.190 "aliases": [ 00:34:03.190 "3ce0f43b-6510-4ed6-9249-948a322e3d7e" 00:34:03.190 ], 00:34:03.190 "product_name": "Malloc disk", 00:34:03.190 "block_size": 512, 00:34:03.190 "num_blocks": 65536, 00:34:03.190 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:03.190 "assigned_rate_limits": { 00:34:03.190 "rw_ios_per_sec": 0, 00:34:03.190 "rw_mbytes_per_sec": 0, 00:34:03.190 "r_mbytes_per_sec": 0, 00:34:03.190 "w_mbytes_per_sec": 0 00:34:03.190 }, 00:34:03.190 "claimed": true, 00:34:03.190 "claim_type": "exclusive_write", 00:34:03.190 "zoned": false, 00:34:03.190 "supported_io_types": { 00:34:03.190 "read": true, 00:34:03.190 "write": true, 00:34:03.190 "unmap": true, 00:34:03.190 "flush": true, 00:34:03.190 "reset": true, 00:34:03.190 "nvme_admin": false, 00:34:03.190 "nvme_io": false, 00:34:03.190 "nvme_io_md": false, 00:34:03.190 "write_zeroes": true, 00:34:03.190 "zcopy": true, 00:34:03.190 "get_zone_info": false, 00:34:03.190 "zone_management": false, 00:34:03.190 "zone_append": false, 00:34:03.190 "compare": false, 00:34:03.190 "compare_and_write": false, 00:34:03.190 "abort": true, 00:34:03.190 "seek_hole": false, 00:34:03.190 "seek_data": false, 00:34:03.190 "copy": true, 00:34:03.190 "nvme_iov_md": false 00:34:03.190 }, 00:34:03.190 "memory_domains": [ 00:34:03.190 { 00:34:03.190 "dma_device_id": "system", 00:34:03.190 "dma_device_type": 1 00:34:03.190 }, 00:34:03.190 { 00:34:03.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:03.190 "dma_device_type": 2 00:34:03.190 } 00:34:03.190 ], 00:34:03.190 "driver_specific": {} 00:34:03.190 } 00:34:03.190 ] 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.190 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:03.447 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.447 "name": "Existed_Raid", 00:34:03.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.447 "strip_size_kb": 64, 00:34:03.447 "state": "configuring", 00:34:03.447 "raid_level": "raid5f", 00:34:03.447 "superblock": false, 00:34:03.447 "num_base_bdevs": 3, 00:34:03.447 "num_base_bdevs_discovered": 2, 00:34:03.447 "num_base_bdevs_operational": 3, 00:34:03.447 "base_bdevs_list": [ 00:34:03.447 { 00:34:03.447 "name": "BaseBdev1", 00:34:03.447 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:03.447 "is_configured": true, 00:34:03.447 "data_offset": 0, 00:34:03.447 "data_size": 65536 00:34:03.447 }, 00:34:03.447 { 00:34:03.447 "name": null, 00:34:03.447 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:03.447 "is_configured": false, 00:34:03.447 "data_offset": 0, 00:34:03.447 "data_size": 65536 00:34:03.447 }, 00:34:03.447 { 00:34:03.447 "name": "BaseBdev3", 00:34:03.447 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:03.447 "is_configured": true, 00:34:03.447 "data_offset": 0, 00:34:03.447 "data_size": 65536 00:34:03.447 } 00:34:03.447 ] 00:34:03.447 }' 00:34:03.447 09:00:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.447 09:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.380 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.380 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:04.380 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:04.380 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:04.639 [2024-07-12 09:00:39.774349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.639 09:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:04.897 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:04.897 "name": "Existed_Raid", 00:34:04.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.897 "strip_size_kb": 64, 00:34:04.897 "state": "configuring", 00:34:04.897 "raid_level": "raid5f", 00:34:04.897 "superblock": false, 00:34:04.897 "num_base_bdevs": 3, 00:34:04.897 "num_base_bdevs_discovered": 1, 00:34:04.897 "num_base_bdevs_operational": 3, 00:34:04.897 "base_bdevs_list": [ 00:34:04.897 { 00:34:04.897 "name": "BaseBdev1", 00:34:04.897 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:04.897 "is_configured": true, 00:34:04.897 "data_offset": 0, 00:34:04.897 "data_size": 65536 00:34:04.897 }, 00:34:04.897 { 00:34:04.897 "name": null, 00:34:04.897 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:04.897 "is_configured": false, 00:34:04.897 "data_offset": 0, 00:34:04.897 "data_size": 65536 00:34:04.897 }, 00:34:04.897 { 00:34:04.897 "name": null, 00:34:04.897 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:04.897 "is_configured": false, 00:34:04.897 "data_offset": 0, 00:34:04.897 "data_size": 65536 00:34:04.897 } 00:34:04.897 ] 00:34:04.897 }' 00:34:04.897 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:04.897 09:00:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:05.832 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.832 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:05.832 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:05.832 09:00:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:06.090 [2024-07-12 09:00:41.162730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.091 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:06.348 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:06.348 "name": "Existed_Raid", 00:34:06.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.348 "strip_size_kb": 64, 00:34:06.348 "state": "configuring", 00:34:06.348 "raid_level": "raid5f", 00:34:06.348 "superblock": false, 00:34:06.348 "num_base_bdevs": 3, 00:34:06.348 "num_base_bdevs_discovered": 2, 00:34:06.348 "num_base_bdevs_operational": 3, 00:34:06.348 "base_bdevs_list": [ 00:34:06.348 { 00:34:06.348 "name": "BaseBdev1", 00:34:06.348 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:06.348 "is_configured": true, 00:34:06.348 "data_offset": 0, 00:34:06.348 "data_size": 65536 00:34:06.348 }, 00:34:06.348 { 00:34:06.348 "name": null, 00:34:06.348 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:06.348 "is_configured": false, 00:34:06.348 "data_offset": 0, 00:34:06.348 "data_size": 65536 00:34:06.348 }, 00:34:06.348 { 00:34:06.348 "name": "BaseBdev3", 00:34:06.348 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:06.348 "is_configured": true, 00:34:06.348 "data_offset": 0, 00:34:06.348 "data_size": 65536 00:34:06.348 } 00:34:06.348 ] 00:34:06.348 }' 00:34:06.348 09:00:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:06.348 09:00:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.280 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.280 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:07.280 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:07.280 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:07.538 [2024-07-12 09:00:42.647276] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.797 09:00:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:07.797 "name": "Existed_Raid", 00:34:07.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.797 "strip_size_kb": 64, 00:34:07.797 "state": "configuring", 00:34:07.797 "raid_level": "raid5f", 00:34:07.797 "superblock": false, 00:34:07.797 "num_base_bdevs": 3, 00:34:07.797 "num_base_bdevs_discovered": 1, 00:34:07.797 "num_base_bdevs_operational": 3, 00:34:07.797 "base_bdevs_list": [ 00:34:07.797 { 00:34:07.797 "name": null, 00:34:07.797 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:07.797 "is_configured": false, 00:34:07.797 "data_offset": 0, 00:34:07.797 "data_size": 65536 00:34:07.797 }, 00:34:07.797 { 00:34:07.797 "name": null, 00:34:07.797 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:07.797 "is_configured": false, 00:34:07.797 "data_offset": 0, 00:34:07.797 "data_size": 65536 00:34:07.797 }, 00:34:07.797 { 00:34:07.797 "name": "BaseBdev3", 00:34:07.797 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:07.797 "is_configured": true, 00:34:07.797 "data_offset": 0, 00:34:07.797 "data_size": 65536 00:34:07.797 } 00:34:07.797 ] 00:34:07.797 }' 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:07.797 09:00:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.730 09:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.730 09:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:08.730 09:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:08.730 09:00:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:08.988 [2024-07-12 09:00:44.098566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:08.988 
09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.988 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:09.246 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:09.246 "name": "Existed_Raid", 00:34:09.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.246 "strip_size_kb": 64, 00:34:09.246 "state": "configuring", 00:34:09.246 "raid_level": "raid5f", 00:34:09.246 "superblock": false, 00:34:09.246 "num_base_bdevs": 3, 00:34:09.246 "num_base_bdevs_discovered": 2, 00:34:09.246 "num_base_bdevs_operational": 3, 00:34:09.246 "base_bdevs_list": [ 00:34:09.246 { 00:34:09.246 "name": null, 00:34:09.246 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:09.246 "is_configured": false, 00:34:09.246 "data_offset": 0, 00:34:09.246 "data_size": 65536 00:34:09.246 }, 00:34:09.246 { 00:34:09.246 "name": "BaseBdev2", 00:34:09.246 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:09.246 "is_configured": true, 00:34:09.246 "data_offset": 0, 00:34:09.246 "data_size": 65536 00:34:09.246 }, 00:34:09.246 { 00:34:09.246 "name": "BaseBdev3", 00:34:09.246 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:09.246 "is_configured": true, 00:34:09.246 "data_offset": 0, 00:34:09.246 "data_size": 65536 00:34:09.246 } 00:34:09.246 ] 00:34:09.246 }' 00:34:09.246 09:00:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.246 09:00:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:10.181 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.181 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:10.181 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:10.181 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.181 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:10.438 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3ce0f43b-6510-4ed6-9249-948a322e3d7e 00:34:10.695 [2024-07-12 09:00:45.780256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:10.695 [2024-07-12 09:00:45.780692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:34:10.695 [2024-07-12 09:00:45.780846] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:10.695 [2024-07-12 
09:00:45.781128] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:10.695 [2024-07-12 09:00:45.785938] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:34:10.695 [2024-07-12 09:00:45.786087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:34:10.695 [2024-07-12 09:00:45.786611] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:10.695 NewBaseBdev 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:10.695 09:00:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:10.951 09:00:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:11.209 [ 00:34:11.209 { 00:34:11.209 "name": "NewBaseBdev", 00:34:11.209 "aliases": [ 00:34:11.209 "3ce0f43b-6510-4ed6-9249-948a322e3d7e" 00:34:11.209 ], 00:34:11.209 "product_name": "Malloc disk", 00:34:11.209 "block_size": 512, 00:34:11.209 "num_blocks": 65536, 00:34:11.209 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:11.209 "assigned_rate_limits": { 00:34:11.209 "rw_ios_per_sec": 0, 00:34:11.209 "rw_mbytes_per_sec": 0, 00:34:11.209 "r_mbytes_per_sec": 0, 00:34:11.209 "w_mbytes_per_sec": 0 00:34:11.209 }, 00:34:11.209 "claimed": true, 00:34:11.209 "claim_type": "exclusive_write", 00:34:11.209 "zoned": false, 00:34:11.209 "supported_io_types": { 00:34:11.209 "read": true, 00:34:11.209 "write": true, 00:34:11.209 "unmap": true, 00:34:11.209 "flush": true, 00:34:11.209 "reset": true, 00:34:11.209 "nvme_admin": false, 00:34:11.209 "nvme_io": false, 00:34:11.209 "nvme_io_md": false, 00:34:11.209 "write_zeroes": true, 00:34:11.209 "zcopy": true, 00:34:11.209 "get_zone_info": false, 00:34:11.209 "zone_management": false, 00:34:11.209 "zone_append": false, 00:34:11.209 "compare": false, 00:34:11.209 "compare_and_write": false, 00:34:11.209 "abort": true, 00:34:11.209 "seek_hole": false, 00:34:11.209 "seek_data": false, 00:34:11.209 "copy": true, 00:34:11.209 "nvme_iov_md": false 00:34:11.209 }, 00:34:11.209 "memory_domains": [ 00:34:11.209 { 00:34:11.209 "dma_device_id": "system", 00:34:11.209 "dma_device_type": 1 00:34:11.209 }, 00:34:11.209 { 00:34:11.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:11.209 "dma_device_type": 2 00:34:11.209 } 00:34:11.209 ], 00:34:11.209 "driver_specific": {} 00:34:11.209 } 00:34:11.209 ] 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:11.209 09:00:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:11.209 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:11.210 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:11.210 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:11.210 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.467 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:11.467 "name": "Existed_Raid", 00:34:11.467 "uuid": "eb0f745c-9e72-418f-ac6b-badbebe2cefe", 00:34:11.467 "strip_size_kb": 64, 00:34:11.467 "state": "online", 00:34:11.467 "raid_level": "raid5f", 00:34:11.467 "superblock": false, 00:34:11.467 "num_base_bdevs": 3, 00:34:11.467 "num_base_bdevs_discovered": 3, 00:34:11.467 "num_base_bdevs_operational": 3, 00:34:11.467 "base_bdevs_list": [ 00:34:11.467 { 00:34:11.467 "name": "NewBaseBdev", 00:34:11.467 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:11.467 "is_configured": true, 00:34:11.467 "data_offset": 0, 00:34:11.467 "data_size": 65536 00:34:11.467 }, 00:34:11.467 { 00:34:11.467 "name": "BaseBdev2", 00:34:11.467 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:11.467 "is_configured": true, 00:34:11.467 "data_offset": 0, 00:34:11.467 "data_size": 65536 00:34:11.467 }, 00:34:11.467 { 00:34:11.467 "name": "BaseBdev3", 00:34:11.468 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:11.468 "is_configured": true, 00:34:11.468 "data_offset": 0, 00:34:11.468 "data_size": 65536 00:34:11.468 } 00:34:11.468 ] 00:34:11.468 }' 00:34:11.468 09:00:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:11.468 09:00:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:12.034 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:12.295 [2024-07-12 09:00:47.408741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:12.295 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:12.295 "name": "Existed_Raid", 00:34:12.295 "aliases": [ 00:34:12.295 "eb0f745c-9e72-418f-ac6b-badbebe2cefe" 00:34:12.295 ], 00:34:12.295 "product_name": "Raid Volume", 00:34:12.295 "block_size": 512, 00:34:12.295 "num_blocks": 131072, 00:34:12.295 "uuid": "eb0f745c-9e72-418f-ac6b-badbebe2cefe", 00:34:12.295 "assigned_rate_limits": { 00:34:12.295 "rw_ios_per_sec": 0, 00:34:12.295 "rw_mbytes_per_sec": 0, 00:34:12.295 "r_mbytes_per_sec": 0, 00:34:12.295 "w_mbytes_per_sec": 0 00:34:12.295 }, 00:34:12.295 "claimed": false, 00:34:12.295 "zoned": false, 00:34:12.295 "supported_io_types": { 00:34:12.295 "read": true, 00:34:12.295 "write": true, 00:34:12.295 "unmap": false, 00:34:12.295 "flush": false, 00:34:12.295 "reset": true, 00:34:12.295 "nvme_admin": false, 00:34:12.295 "nvme_io": false, 00:34:12.296 "nvme_io_md": false, 00:34:12.296 "write_zeroes": true, 00:34:12.296 "zcopy": false, 00:34:12.296 "get_zone_info": false, 00:34:12.296 "zone_management": false, 00:34:12.296 "zone_append": false, 00:34:12.296 "compare": false, 00:34:12.296 "compare_and_write": false, 00:34:12.296 "abort": false, 00:34:12.296 "seek_hole": false, 00:34:12.296 "seek_data": false, 00:34:12.296 "copy": false, 00:34:12.296 "nvme_iov_md": false 00:34:12.296 }, 00:34:12.296 "driver_specific": { 00:34:12.296 "raid": { 00:34:12.296 "uuid": "eb0f745c-9e72-418f-ac6b-badbebe2cefe", 00:34:12.296 "strip_size_kb": 64, 00:34:12.296 "state": "online", 00:34:12.296 "raid_level": "raid5f", 00:34:12.296 "superblock": false, 00:34:12.296 "num_base_bdevs": 3, 00:34:12.296 "num_base_bdevs_discovered": 3, 00:34:12.296 "num_base_bdevs_operational": 3, 00:34:12.296 "base_bdevs_list": [ 00:34:12.296 { 00:34:12.296 "name": "NewBaseBdev", 00:34:12.296 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:12.296 "is_configured": true, 00:34:12.296 "data_offset": 0, 00:34:12.296 "data_size": 65536 00:34:12.296 }, 00:34:12.296 { 00:34:12.296 "name": "BaseBdev2", 00:34:12.296 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:12.296 "is_configured": true, 00:34:12.296 "data_offset": 0, 00:34:12.296 "data_size": 65536 00:34:12.296 }, 00:34:12.296 { 00:34:12.296 "name": "BaseBdev3", 00:34:12.296 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:12.296 "is_configured": true, 00:34:12.296 "data_offset": 0, 00:34:12.296 "data_size": 65536 00:34:12.296 } 00:34:12.296 ] 00:34:12.296 } 00:34:12.296 } 00:34:12.296 }' 00:34:12.296 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:12.296 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:12.296 BaseBdev2 00:34:12.296 BaseBdev3' 00:34:12.296 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:12.296 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:12.296 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:12.567 
09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:12.567 "name": "NewBaseBdev", 00:34:12.567 "aliases": [ 00:34:12.567 "3ce0f43b-6510-4ed6-9249-948a322e3d7e" 00:34:12.567 ], 00:34:12.567 "product_name": "Malloc disk", 00:34:12.567 "block_size": 512, 00:34:12.567 "num_blocks": 65536, 00:34:12.567 "uuid": "3ce0f43b-6510-4ed6-9249-948a322e3d7e", 00:34:12.567 "assigned_rate_limits": { 00:34:12.567 "rw_ios_per_sec": 0, 00:34:12.567 "rw_mbytes_per_sec": 0, 00:34:12.567 "r_mbytes_per_sec": 0, 00:34:12.567 "w_mbytes_per_sec": 0 00:34:12.567 }, 00:34:12.567 "claimed": true, 00:34:12.567 "claim_type": "exclusive_write", 00:34:12.567 "zoned": false, 00:34:12.567 "supported_io_types": { 00:34:12.567 "read": true, 00:34:12.567 "write": true, 00:34:12.567 "unmap": true, 00:34:12.567 "flush": true, 00:34:12.567 "reset": true, 00:34:12.567 "nvme_admin": false, 00:34:12.567 "nvme_io": false, 00:34:12.567 "nvme_io_md": false, 00:34:12.567 "write_zeroes": true, 00:34:12.567 "zcopy": true, 00:34:12.567 "get_zone_info": false, 00:34:12.567 "zone_management": false, 00:34:12.567 "zone_append": false, 00:34:12.567 "compare": false, 00:34:12.567 "compare_and_write": false, 00:34:12.567 "abort": true, 00:34:12.567 "seek_hole": false, 00:34:12.567 "seek_data": false, 00:34:12.567 "copy": true, 00:34:12.567 "nvme_iov_md": false 00:34:12.567 }, 00:34:12.567 "memory_domains": [ 00:34:12.567 { 00:34:12.567 "dma_device_id": "system", 00:34:12.567 "dma_device_type": 1 00:34:12.567 }, 00:34:12.567 { 00:34:12.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:12.567 "dma_device_type": 2 00:34:12.567 } 00:34:12.567 ], 00:34:12.567 "driver_specific": {} 00:34:12.567 }' 00:34:12.567 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:12.840 09:00:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:12.840 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:13.103 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:13.392 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:13.392 "name": "BaseBdev2", 00:34:13.392 "aliases": [ 00:34:13.392 
"77ee8271-7b25-423b-9e39-c5f21b01baec" 00:34:13.392 ], 00:34:13.392 "product_name": "Malloc disk", 00:34:13.392 "block_size": 512, 00:34:13.392 "num_blocks": 65536, 00:34:13.392 "uuid": "77ee8271-7b25-423b-9e39-c5f21b01baec", 00:34:13.392 "assigned_rate_limits": { 00:34:13.392 "rw_ios_per_sec": 0, 00:34:13.392 "rw_mbytes_per_sec": 0, 00:34:13.392 "r_mbytes_per_sec": 0, 00:34:13.392 "w_mbytes_per_sec": 0 00:34:13.392 }, 00:34:13.392 "claimed": true, 00:34:13.392 "claim_type": "exclusive_write", 00:34:13.392 "zoned": false, 00:34:13.392 "supported_io_types": { 00:34:13.392 "read": true, 00:34:13.392 "write": true, 00:34:13.392 "unmap": true, 00:34:13.392 "flush": true, 00:34:13.392 "reset": true, 00:34:13.392 "nvme_admin": false, 00:34:13.392 "nvme_io": false, 00:34:13.393 "nvme_io_md": false, 00:34:13.393 "write_zeroes": true, 00:34:13.393 "zcopy": true, 00:34:13.393 "get_zone_info": false, 00:34:13.393 "zone_management": false, 00:34:13.393 "zone_append": false, 00:34:13.393 "compare": false, 00:34:13.393 "compare_and_write": false, 00:34:13.393 "abort": true, 00:34:13.393 "seek_hole": false, 00:34:13.393 "seek_data": false, 00:34:13.393 "copy": true, 00:34:13.393 "nvme_iov_md": false 00:34:13.393 }, 00:34:13.393 "memory_domains": [ 00:34:13.393 { 00:34:13.393 "dma_device_id": "system", 00:34:13.393 "dma_device_type": 1 00:34:13.393 }, 00:34:13.393 { 00:34:13.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.393 "dma_device_type": 2 00:34:13.393 } 00:34:13.393 ], 00:34:13.393 "driver_specific": {} 00:34:13.393 }' 00:34:13.393 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:13.393 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:13.393 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:13.393 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:13.655 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:13.912 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:13.912 09:00:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:13.912 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:13.912 "name": "BaseBdev3", 00:34:13.912 "aliases": [ 00:34:13.912 "10be987b-c292-4220-92a4-091b485700c8" 00:34:13.912 ], 00:34:13.912 "product_name": "Malloc disk", 00:34:13.912 "block_size": 512, 00:34:13.912 "num_blocks": 65536, 
00:34:13.912 "uuid": "10be987b-c292-4220-92a4-091b485700c8", 00:34:13.912 "assigned_rate_limits": { 00:34:13.912 "rw_ios_per_sec": 0, 00:34:13.912 "rw_mbytes_per_sec": 0, 00:34:13.912 "r_mbytes_per_sec": 0, 00:34:13.912 "w_mbytes_per_sec": 0 00:34:13.912 }, 00:34:13.912 "claimed": true, 00:34:13.912 "claim_type": "exclusive_write", 00:34:13.912 "zoned": false, 00:34:13.912 "supported_io_types": { 00:34:13.912 "read": true, 00:34:13.912 "write": true, 00:34:13.912 "unmap": true, 00:34:13.912 "flush": true, 00:34:13.912 "reset": true, 00:34:13.912 "nvme_admin": false, 00:34:13.912 "nvme_io": false, 00:34:13.912 "nvme_io_md": false, 00:34:13.912 "write_zeroes": true, 00:34:13.913 "zcopy": true, 00:34:13.913 "get_zone_info": false, 00:34:13.913 "zone_management": false, 00:34:13.913 "zone_append": false, 00:34:13.913 "compare": false, 00:34:13.913 "compare_and_write": false, 00:34:13.913 "abort": true, 00:34:13.913 "seek_hole": false, 00:34:13.913 "seek_data": false, 00:34:13.913 "copy": true, 00:34:13.913 "nvme_iov_md": false 00:34:13.913 }, 00:34:13.913 "memory_domains": [ 00:34:13.913 { 00:34:13.913 "dma_device_id": "system", 00:34:13.913 "dma_device_type": 1 00:34:13.913 }, 00:34:13.913 { 00:34:13.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.913 "dma_device_type": 2 00:34:13.913 } 00:34:13.913 ], 00:34:13.913 "driver_specific": {} 00:34:13.913 }' 00:34:13.913 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.170 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.428 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:14.428 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.428 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.428 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:14.428 09:00:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:14.686 [2024-07-12 09:00:49.773106] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:14.686 [2024-07-12 09:00:49.773371] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:14.686 [2024-07-12 09:00:49.773553] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.686 [2024-07-12 09:00:49.773959] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.686 [2024-07-12 09:00:49.774085] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:34:14.686 09:00:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 152557 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 152557 ']' 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 152557 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152557 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152557' 00:34:14.686 killing process with pid 152557 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 152557 00:34:14.686 [2024-07-12 09:00:49.821606] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:14.686 09:00:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 152557 00:34:14.944 [2024-07-12 09:00:50.045826] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:15.892 09:00:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:34:15.892 00:34:15.892 real 0m32.006s 00:34:15.892 user 1m0.046s 00:34:15.892 sys 0m3.569s 00:34:15.892 09:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:15.892 09:00:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.892 ************************************ 00:34:15.892 END TEST raid5f_state_function_test 00:34:15.892 ************************************ 00:34:16.149 09:00:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:16.149 09:00:51 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:34:16.149 09:00:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:16.149 09:00:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:16.149 09:00:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:16.149 ************************************ 00:34:16.149 START TEST raid5f_state_function_test_sb 00:34:16.149 ************************************ 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i = 1 )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:16.149 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=153586 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 153586' 00:34:16.150 Process raid pid: 153586 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 153586 /var/tmp/spdk-raid.sock 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 153586 ']' 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:16.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.150 09:00:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:16.150 [2024-07-12 09:00:51.199954] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:34:16.150 [2024-07-12 09:00:51.200141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.408 [2024-07-12 09:00:51.361087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.408 [2024-07-12 09:00:51.594055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.666 [2024-07-12 09:00:51.782994] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:17.234 09:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:17.234 09:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:34:17.234 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:17.234 [2024-07-12 09:00:52.323329] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:17.234 [2024-07-12 09:00:52.323438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:17.235 [2024-07-12 09:00:52.323453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:17.235 [2024-07-12 09:00:52.323481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:17.235 [2024-07-12 09:00:52.323490] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:17.235 [2024-07-12 09:00:52.323505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.235 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.492 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.492 "name": "Existed_Raid", 00:34:17.492 "uuid": "34606fbb-8bf8-420e-a8e8-1ddc0a255b52", 00:34:17.492 "strip_size_kb": 64, 00:34:17.492 "state": "configuring", 00:34:17.492 "raid_level": "raid5f", 00:34:17.492 "superblock": true, 00:34:17.492 "num_base_bdevs": 3, 00:34:17.492 "num_base_bdevs_discovered": 0, 00:34:17.492 "num_base_bdevs_operational": 3, 00:34:17.492 "base_bdevs_list": [ 00:34:17.492 { 00:34:17.492 "name": "BaseBdev1", 00:34:17.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.492 "is_configured": false, 00:34:17.492 "data_offset": 0, 00:34:17.492 "data_size": 0 00:34:17.492 }, 00:34:17.492 { 00:34:17.492 "name": "BaseBdev2", 00:34:17.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.492 "is_configured": false, 00:34:17.492 "data_offset": 0, 00:34:17.492 "data_size": 0 00:34:17.492 }, 00:34:17.492 { 00:34:17.492 "name": "BaseBdev3", 00:34:17.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.492 "is_configured": false, 00:34:17.492 "data_offset": 0, 00:34:17.492 "data_size": 0 00:34:17.492 } 00:34:17.492 ] 00:34:17.492 }' 00:34:17.492 09:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.492 09:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.426 09:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:18.426 [2024-07-12 09:00:53.555430] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:18.426 [2024-07-12 09:00:53.555500] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:18.426 09:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:18.684 [2024-07-12 09:00:53.815518] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:18.684 [2024-07-12 09:00:53.815612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:18.684 [2024-07-12 09:00:53.815626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:18.684 [2024-07-12 09:00:53.815644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:18.684 [2024-07-12 09:00:53.815652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:18.684 [2024-07-12 09:00:53.815675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:18.684 09:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:18.942 [2024-07-12 09:00:54.084859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:18.942 BaseBdev1 00:34:18.942 09:00:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:18.942 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:19.200 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:19.458 [ 00:34:19.458 { 00:34:19.458 "name": "BaseBdev1", 00:34:19.458 "aliases": [ 00:34:19.458 "00dc320e-65c0-4fce-93f4-13d46bfe746c" 00:34:19.458 ], 00:34:19.458 "product_name": "Malloc disk", 00:34:19.458 "block_size": 512, 00:34:19.458 "num_blocks": 65536, 00:34:19.458 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:19.458 "assigned_rate_limits": { 00:34:19.458 "rw_ios_per_sec": 0, 00:34:19.458 "rw_mbytes_per_sec": 0, 00:34:19.458 "r_mbytes_per_sec": 0, 00:34:19.458 "w_mbytes_per_sec": 0 00:34:19.458 }, 00:34:19.458 "claimed": true, 00:34:19.458 "claim_type": "exclusive_write", 00:34:19.458 "zoned": false, 00:34:19.458 "supported_io_types": { 00:34:19.458 "read": true, 00:34:19.458 "write": true, 00:34:19.458 "unmap": true, 00:34:19.458 "flush": true, 00:34:19.458 "reset": true, 00:34:19.458 "nvme_admin": false, 00:34:19.458 "nvme_io": false, 00:34:19.458 "nvme_io_md": false, 00:34:19.458 "write_zeroes": true, 00:34:19.458 "zcopy": true, 00:34:19.458 "get_zone_info": false, 00:34:19.458 "zone_management": false, 00:34:19.458 "zone_append": false, 00:34:19.458 "compare": false, 00:34:19.458 "compare_and_write": false, 00:34:19.458 "abort": true, 00:34:19.458 "seek_hole": false, 00:34:19.458 "seek_data": false, 00:34:19.458 "copy": true, 00:34:19.458 "nvme_iov_md": false 00:34:19.458 }, 00:34:19.458 "memory_domains": [ 00:34:19.458 { 00:34:19.458 "dma_device_id": "system", 00:34:19.458 "dma_device_type": 1 00:34:19.458 }, 00:34:19.458 { 00:34:19.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.458 "dma_device_type": 2 00:34:19.458 } 00:34:19.458 ], 00:34:19.458 "driver_specific": {} 00:34:19.458 } 00:34:19.458 ] 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.458 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.716 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.716 "name": "Existed_Raid", 00:34:19.716 "uuid": "24f0f288-217b-4146-92cb-44859a0bfe65", 00:34:19.716 "strip_size_kb": 64, 00:34:19.716 "state": "configuring", 00:34:19.716 "raid_level": "raid5f", 00:34:19.716 "superblock": true, 00:34:19.716 "num_base_bdevs": 3, 00:34:19.716 "num_base_bdevs_discovered": 1, 00:34:19.716 "num_base_bdevs_operational": 3, 00:34:19.716 "base_bdevs_list": [ 00:34:19.716 { 00:34:19.716 "name": "BaseBdev1", 00:34:19.716 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:19.716 "is_configured": true, 00:34:19.716 "data_offset": 2048, 00:34:19.716 "data_size": 63488 00:34:19.716 }, 00:34:19.716 { 00:34:19.716 "name": "BaseBdev2", 00:34:19.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.716 "is_configured": false, 00:34:19.716 "data_offset": 0, 00:34:19.716 "data_size": 0 00:34:19.716 }, 00:34:19.716 { 00:34:19.717 "name": "BaseBdev3", 00:34:19.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.717 "is_configured": false, 00:34:19.717 "data_offset": 0, 00:34:19.717 "data_size": 0 00:34:19.717 } 00:34:19.717 ] 00:34:19.717 }' 00:34:19.717 09:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.717 09:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:20.653 09:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:20.653 [2024-07-12 09:00:55.769349] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:20.653 [2024-07-12 09:00:55.769466] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:34:20.653 09:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:20.911 [2024-07-12 09:00:56.037431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:20.911 [2024-07-12 09:00:56.039585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:20.911 [2024-07-12 09:00:56.039664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:20.911 [2024-07-12 09:00:56.039678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:20.911 [2024-07-12 09:00:56.039749] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.911 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.170 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:21.170 "name": "Existed_Raid", 00:34:21.170 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:21.170 "strip_size_kb": 64, 00:34:21.170 "state": "configuring", 00:34:21.170 "raid_level": "raid5f", 00:34:21.170 "superblock": true, 00:34:21.170 "num_base_bdevs": 3, 00:34:21.170 "num_base_bdevs_discovered": 1, 00:34:21.170 "num_base_bdevs_operational": 3, 00:34:21.170 "base_bdevs_list": [ 00:34:21.170 { 00:34:21.170 "name": "BaseBdev1", 00:34:21.170 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:21.170 "is_configured": true, 00:34:21.170 "data_offset": 2048, 00:34:21.170 "data_size": 63488 00:34:21.170 }, 00:34:21.170 { 00:34:21.170 "name": "BaseBdev2", 00:34:21.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.170 "is_configured": false, 00:34:21.170 "data_offset": 0, 00:34:21.170 "data_size": 0 00:34:21.170 }, 00:34:21.170 { 00:34:21.170 "name": "BaseBdev3", 00:34:21.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.170 "is_configured": false, 00:34:21.170 "data_offset": 0, 00:34:21.170 "data_size": 0 00:34:21.170 } 00:34:21.170 ] 00:34:21.170 }' 00:34:21.170 09:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:21.170 09:00:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.106 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:22.364 [2024-07-12 09:00:57.312135] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.364 BaseBdev2 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:22.364 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:22.622 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:22.881 [ 00:34:22.881 { 00:34:22.881 "name": "BaseBdev2", 00:34:22.881 "aliases": [ 00:34:22.881 "58a940e1-be9b-4cde-b79b-8b48aeb17b20" 00:34:22.881 ], 00:34:22.881 "product_name": "Malloc disk", 00:34:22.881 "block_size": 512, 00:34:22.881 "num_blocks": 65536, 00:34:22.881 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:22.881 "assigned_rate_limits": { 00:34:22.881 "rw_ios_per_sec": 0, 00:34:22.881 "rw_mbytes_per_sec": 0, 00:34:22.881 "r_mbytes_per_sec": 0, 00:34:22.881 "w_mbytes_per_sec": 0 00:34:22.881 }, 00:34:22.881 "claimed": true, 00:34:22.881 "claim_type": "exclusive_write", 00:34:22.881 "zoned": false, 00:34:22.881 "supported_io_types": { 00:34:22.881 "read": true, 00:34:22.881 "write": true, 00:34:22.881 "unmap": true, 00:34:22.881 "flush": true, 00:34:22.881 "reset": true, 00:34:22.881 "nvme_admin": false, 00:34:22.881 "nvme_io": false, 00:34:22.881 "nvme_io_md": false, 00:34:22.881 "write_zeroes": true, 00:34:22.881 "zcopy": true, 00:34:22.881 "get_zone_info": false, 00:34:22.881 "zone_management": false, 00:34:22.881 "zone_append": false, 00:34:22.881 "compare": false, 00:34:22.881 "compare_and_write": false, 00:34:22.881 "abort": true, 00:34:22.881 "seek_hole": false, 00:34:22.881 "seek_data": false, 00:34:22.881 "copy": true, 00:34:22.881 "nvme_iov_md": false 00:34:22.881 }, 00:34:22.881 "memory_domains": [ 00:34:22.881 { 00:34:22.881 "dma_device_id": "system", 00:34:22.881 "dma_device_type": 1 00:34:22.881 }, 00:34:22.881 { 00:34:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.881 "dma_device_type": 2 00:34:22.881 } 00:34:22.881 ], 00:34:22.881 "driver_specific": {} 00:34:22.881 } 00:34:22.881 ] 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.881 09:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.140 09:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:23.140 "name": "Existed_Raid", 00:34:23.140 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:23.140 "strip_size_kb": 64, 00:34:23.140 "state": "configuring", 00:34:23.140 "raid_level": "raid5f", 00:34:23.140 "superblock": true, 00:34:23.140 "num_base_bdevs": 3, 00:34:23.140 "num_base_bdevs_discovered": 2, 00:34:23.140 "num_base_bdevs_operational": 3, 00:34:23.140 "base_bdevs_list": [ 00:34:23.140 { 00:34:23.140 "name": "BaseBdev1", 00:34:23.140 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:23.140 "is_configured": true, 00:34:23.140 "data_offset": 2048, 00:34:23.140 "data_size": 63488 00:34:23.140 }, 00:34:23.140 { 00:34:23.140 "name": "BaseBdev2", 00:34:23.140 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:23.140 "is_configured": true, 00:34:23.140 "data_offset": 2048, 00:34:23.140 "data_size": 63488 00:34:23.140 }, 00:34:23.140 { 00:34:23.140 "name": "BaseBdev3", 00:34:23.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.140 "is_configured": false, 00:34:23.140 "data_offset": 0, 00:34:23.140 "data_size": 0 00:34:23.140 } 00:34:23.140 ] 00:34:23.140 }' 00:34:23.140 09:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:23.140 09:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:23.716 09:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:23.980 [2024-07-12 09:00:59.027642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:23.980 [2024-07-12 09:00:59.027944] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:34:23.980 BaseBdev3 00:34:23.980 [2024-07-12 09:00:59.027977] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:23.980 [2024-07-12 09:00:59.028435] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:34:23.980 [2024-07-12 09:00:59.033695] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:34:23.980 [2024-07-12 09:00:59.033724] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:34:23.980 [2024-07-12 09:00:59.033911] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:23.980 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:24.238 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:24.496 [ 00:34:24.496 { 00:34:24.496 "name": "BaseBdev3", 00:34:24.496 "aliases": [ 00:34:24.496 "0c7c1199-b717-48bf-b587-40a04bb8f493" 00:34:24.496 ], 00:34:24.496 "product_name": "Malloc disk", 00:34:24.496 "block_size": 512, 00:34:24.496 "num_blocks": 65536, 00:34:24.496 "uuid": "0c7c1199-b717-48bf-b587-40a04bb8f493", 00:34:24.496 "assigned_rate_limits": { 00:34:24.496 "rw_ios_per_sec": 0, 00:34:24.496 "rw_mbytes_per_sec": 0, 00:34:24.496 "r_mbytes_per_sec": 0, 00:34:24.496 "w_mbytes_per_sec": 0 00:34:24.496 }, 00:34:24.496 "claimed": true, 00:34:24.496 "claim_type": "exclusive_write", 00:34:24.496 "zoned": false, 00:34:24.496 "supported_io_types": { 00:34:24.496 "read": true, 00:34:24.496 "write": true, 00:34:24.496 "unmap": true, 00:34:24.496 "flush": true, 00:34:24.496 "reset": true, 00:34:24.496 "nvme_admin": false, 00:34:24.496 "nvme_io": false, 00:34:24.496 "nvme_io_md": false, 00:34:24.496 "write_zeroes": true, 00:34:24.496 "zcopy": true, 00:34:24.496 "get_zone_info": false, 00:34:24.496 "zone_management": false, 00:34:24.496 "zone_append": false, 00:34:24.496 "compare": false, 00:34:24.496 "compare_and_write": false, 00:34:24.496 "abort": true, 00:34:24.496 "seek_hole": false, 00:34:24.496 "seek_data": false, 00:34:24.496 "copy": true, 00:34:24.496 "nvme_iov_md": false 00:34:24.496 }, 00:34:24.496 "memory_domains": [ 00:34:24.496 { 00:34:24.496 "dma_device_id": "system", 00:34:24.496 "dma_device_type": 1 00:34:24.496 }, 00:34:24.496 { 00:34:24.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.496 "dma_device_type": 2 00:34:24.496 } 00:34:24.496 ], 00:34:24.496 "driver_specific": {} 00:34:24.496 } 00:34:24.496 ] 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.496 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:24.753 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:24.753 "name": "Existed_Raid", 00:34:24.753 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:24.753 "strip_size_kb": 64, 00:34:24.753 "state": "online", 00:34:24.753 "raid_level": "raid5f", 00:34:24.753 "superblock": true, 00:34:24.753 "num_base_bdevs": 3, 00:34:24.754 "num_base_bdevs_discovered": 3, 00:34:24.754 "num_base_bdevs_operational": 3, 00:34:24.754 "base_bdevs_list": [ 00:34:24.754 { 00:34:24.754 "name": "BaseBdev1", 00:34:24.754 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:24.754 "is_configured": true, 00:34:24.754 "data_offset": 2048, 00:34:24.754 "data_size": 63488 00:34:24.754 }, 00:34:24.754 { 00:34:24.754 "name": "BaseBdev2", 00:34:24.754 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:24.754 "is_configured": true, 00:34:24.754 "data_offset": 2048, 00:34:24.754 "data_size": 63488 00:34:24.754 }, 00:34:24.754 { 00:34:24.754 "name": "BaseBdev3", 00:34:24.754 "uuid": "0c7c1199-b717-48bf-b587-40a04bb8f493", 00:34:24.754 "is_configured": true, 00:34:24.754 "data_offset": 2048, 00:34:24.754 "data_size": 63488 00:34:24.754 } 00:34:24.754 ] 00:34:24.754 }' 00:34:24.754 09:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:24.754 09:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:25.319 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:25.320 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:25.577 [2024-07-12 09:01:00.660003] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:25.577 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:25.577 "name": "Existed_Raid", 00:34:25.577 "aliases": [ 00:34:25.577 "1c0fd4d5-236d-427b-b031-c00e4ab4dc61" 00:34:25.577 ], 00:34:25.577 "product_name": "Raid Volume", 00:34:25.578 "block_size": 512, 00:34:25.578 "num_blocks": 126976, 00:34:25.578 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:25.578 "assigned_rate_limits": { 00:34:25.578 "rw_ios_per_sec": 0, 00:34:25.578 "rw_mbytes_per_sec": 0, 00:34:25.578 "r_mbytes_per_sec": 0, 00:34:25.578 "w_mbytes_per_sec": 0 00:34:25.578 }, 00:34:25.578 "claimed": false, 00:34:25.578 "zoned": false, 00:34:25.578 "supported_io_types": { 00:34:25.578 "read": true, 00:34:25.578 "write": true, 00:34:25.578 "unmap": false, 00:34:25.578 "flush": false, 00:34:25.578 "reset": true, 00:34:25.578 "nvme_admin": false, 00:34:25.578 "nvme_io": false, 00:34:25.578 "nvme_io_md": false, 00:34:25.578 "write_zeroes": true, 00:34:25.578 "zcopy": false, 00:34:25.578 "get_zone_info": false, 00:34:25.578 "zone_management": false, 00:34:25.578 "zone_append": false, 00:34:25.578 "compare": false, 00:34:25.578 "compare_and_write": false, 00:34:25.578 "abort": false, 00:34:25.578 "seek_hole": false, 00:34:25.578 "seek_data": false, 00:34:25.578 "copy": false, 00:34:25.578 "nvme_iov_md": false 00:34:25.578 }, 00:34:25.578 "driver_specific": { 00:34:25.578 "raid": { 00:34:25.578 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:25.578 "strip_size_kb": 64, 00:34:25.578 "state": "online", 00:34:25.578 "raid_level": "raid5f", 00:34:25.578 "superblock": true, 00:34:25.578 "num_base_bdevs": 3, 00:34:25.578 "num_base_bdevs_discovered": 3, 00:34:25.578 "num_base_bdevs_operational": 3, 00:34:25.578 "base_bdevs_list": [ 00:34:25.578 { 00:34:25.578 "name": "BaseBdev1", 00:34:25.578 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:25.578 "is_configured": true, 00:34:25.578 "data_offset": 2048, 00:34:25.578 "data_size": 63488 00:34:25.578 }, 00:34:25.578 { 00:34:25.578 "name": "BaseBdev2", 00:34:25.578 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:25.578 "is_configured": true, 00:34:25.578 "data_offset": 2048, 00:34:25.578 "data_size": 63488 00:34:25.578 }, 00:34:25.578 { 00:34:25.578 "name": "BaseBdev3", 00:34:25.578 "uuid": "0c7c1199-b717-48bf-b587-40a04bb8f493", 00:34:25.578 "is_configured": true, 00:34:25.578 "data_offset": 2048, 00:34:25.578 "data_size": 63488 00:34:25.578 } 00:34:25.578 ] 00:34:25.578 } 00:34:25.578 } 00:34:25.578 }' 00:34:25.578 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:25.578 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:25.578 BaseBdev2 00:34:25.578 BaseBdev3' 00:34:25.578 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:25.578 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:25.578 09:01:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:25.836 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:25.836 "name": "BaseBdev1", 00:34:25.836 "aliases": [ 00:34:25.836 "00dc320e-65c0-4fce-93f4-13d46bfe746c" 00:34:25.836 ], 00:34:25.836 "product_name": "Malloc disk", 00:34:25.836 "block_size": 512, 00:34:25.836 "num_blocks": 65536, 00:34:25.836 "uuid": "00dc320e-65c0-4fce-93f4-13d46bfe746c", 00:34:25.836 "assigned_rate_limits": { 00:34:25.836 "rw_ios_per_sec": 0, 00:34:25.836 "rw_mbytes_per_sec": 0, 00:34:25.836 "r_mbytes_per_sec": 0, 00:34:25.836 "w_mbytes_per_sec": 0 00:34:25.836 }, 00:34:25.836 "claimed": true, 00:34:25.836 "claim_type": "exclusive_write", 00:34:25.836 "zoned": false, 00:34:25.836 "supported_io_types": { 00:34:25.836 "read": true, 00:34:25.836 "write": true, 00:34:25.836 "unmap": true, 00:34:25.836 "flush": true, 00:34:25.836 "reset": true, 00:34:25.836 "nvme_admin": false, 00:34:25.836 "nvme_io": false, 00:34:25.836 "nvme_io_md": false, 00:34:25.836 "write_zeroes": true, 00:34:25.836 "zcopy": true, 00:34:25.836 "get_zone_info": false, 00:34:25.836 "zone_management": false, 00:34:25.836 "zone_append": false, 00:34:25.836 "compare": false, 00:34:25.836 "compare_and_write": false, 00:34:25.836 "abort": true, 00:34:25.836 "seek_hole": false, 00:34:25.836 "seek_data": false, 00:34:25.836 "copy": true, 00:34:25.836 "nvme_iov_md": false 00:34:25.836 }, 00:34:25.836 "memory_domains": [ 00:34:25.836 { 00:34:25.836 "dma_device_id": "system", 00:34:25.836 "dma_device_type": 1 00:34:25.836 }, 00:34:25.836 { 00:34:25.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:25.836 "dma_device_type": 2 00:34:25.836 } 00:34:25.836 ], 00:34:25.836 "driver_specific": {} 00:34:25.836 }' 00:34:25.836 09:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:25.836 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:26.094 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:26.094 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:26.095 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:26.095 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:26.095 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:26.095 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:26.353 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:26.612 09:01:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:26.612 "name": "BaseBdev2", 00:34:26.612 "aliases": [ 00:34:26.612 "58a940e1-be9b-4cde-b79b-8b48aeb17b20" 00:34:26.612 ], 00:34:26.612 "product_name": "Malloc disk", 00:34:26.612 "block_size": 512, 00:34:26.612 "num_blocks": 65536, 00:34:26.612 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:26.612 "assigned_rate_limits": { 00:34:26.612 "rw_ios_per_sec": 0, 00:34:26.612 "rw_mbytes_per_sec": 0, 00:34:26.612 "r_mbytes_per_sec": 0, 00:34:26.612 "w_mbytes_per_sec": 0 00:34:26.612 }, 00:34:26.612 "claimed": true, 00:34:26.612 "claim_type": "exclusive_write", 00:34:26.612 "zoned": false, 00:34:26.612 "supported_io_types": { 00:34:26.612 "read": true, 00:34:26.612 "write": true, 00:34:26.612 "unmap": true, 00:34:26.612 "flush": true, 00:34:26.612 "reset": true, 00:34:26.612 "nvme_admin": false, 00:34:26.612 "nvme_io": false, 00:34:26.612 "nvme_io_md": false, 00:34:26.612 "write_zeroes": true, 00:34:26.612 "zcopy": true, 00:34:26.612 "get_zone_info": false, 00:34:26.612 "zone_management": false, 00:34:26.612 "zone_append": false, 00:34:26.612 "compare": false, 00:34:26.612 "compare_and_write": false, 00:34:26.612 "abort": true, 00:34:26.612 "seek_hole": false, 00:34:26.612 "seek_data": false, 00:34:26.612 "copy": true, 00:34:26.612 "nvme_iov_md": false 00:34:26.612 }, 00:34:26.612 "memory_domains": [ 00:34:26.612 { 00:34:26.612 "dma_device_id": "system", 00:34:26.612 "dma_device_type": 1 00:34:26.612 }, 00:34:26.612 { 00:34:26.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:26.612 "dma_device_type": 2 00:34:26.612 } 00:34:26.612 ], 00:34:26.612 "driver_specific": {} 00:34:26.612 }' 00:34:26.612 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:26.612 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:26.871 09:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:26.871 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:26.871 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:27.130 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:27.130 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:27.130 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:27.130 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:27.130 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:27.392 "name": 
"BaseBdev3", 00:34:27.392 "aliases": [ 00:34:27.392 "0c7c1199-b717-48bf-b587-40a04bb8f493" 00:34:27.392 ], 00:34:27.392 "product_name": "Malloc disk", 00:34:27.392 "block_size": 512, 00:34:27.392 "num_blocks": 65536, 00:34:27.392 "uuid": "0c7c1199-b717-48bf-b587-40a04bb8f493", 00:34:27.392 "assigned_rate_limits": { 00:34:27.392 "rw_ios_per_sec": 0, 00:34:27.392 "rw_mbytes_per_sec": 0, 00:34:27.392 "r_mbytes_per_sec": 0, 00:34:27.392 "w_mbytes_per_sec": 0 00:34:27.392 }, 00:34:27.392 "claimed": true, 00:34:27.392 "claim_type": "exclusive_write", 00:34:27.392 "zoned": false, 00:34:27.392 "supported_io_types": { 00:34:27.392 "read": true, 00:34:27.392 "write": true, 00:34:27.392 "unmap": true, 00:34:27.392 "flush": true, 00:34:27.392 "reset": true, 00:34:27.392 "nvme_admin": false, 00:34:27.392 "nvme_io": false, 00:34:27.392 "nvme_io_md": false, 00:34:27.392 "write_zeroes": true, 00:34:27.392 "zcopy": true, 00:34:27.392 "get_zone_info": false, 00:34:27.392 "zone_management": false, 00:34:27.392 "zone_append": false, 00:34:27.392 "compare": false, 00:34:27.392 "compare_and_write": false, 00:34:27.392 "abort": true, 00:34:27.392 "seek_hole": false, 00:34:27.392 "seek_data": false, 00:34:27.392 "copy": true, 00:34:27.392 "nvme_iov_md": false 00:34:27.392 }, 00:34:27.392 "memory_domains": [ 00:34:27.392 { 00:34:27.392 "dma_device_id": "system", 00:34:27.392 "dma_device_type": 1 00:34:27.392 }, 00:34:27.392 { 00:34:27.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:27.392 "dma_device_type": 2 00:34:27.392 } 00:34:27.392 ], 00:34:27.392 "driver_specific": {} 00:34:27.392 }' 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:27.392 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:27.651 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:27.911 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:27.911 09:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:27.911 [2024-07-12 09:01:03.048357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:28.174 09:01:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.174 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.432 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.432 "name": "Existed_Raid", 00:34:28.432 "uuid": "1c0fd4d5-236d-427b-b031-c00e4ab4dc61", 00:34:28.432 "strip_size_kb": 64, 00:34:28.432 "state": "online", 00:34:28.432 "raid_level": "raid5f", 00:34:28.432 "superblock": true, 00:34:28.432 "num_base_bdevs": 3, 00:34:28.432 "num_base_bdevs_discovered": 2, 00:34:28.432 "num_base_bdevs_operational": 2, 00:34:28.432 "base_bdevs_list": [ 00:34:28.432 { 00:34:28.432 "name": null, 00:34:28.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.432 "is_configured": false, 00:34:28.432 "data_offset": 2048, 00:34:28.432 "data_size": 63488 00:34:28.432 }, 00:34:28.432 { 00:34:28.432 "name": "BaseBdev2", 00:34:28.432 "uuid": "58a940e1-be9b-4cde-b79b-8b48aeb17b20", 00:34:28.432 "is_configured": true, 00:34:28.432 "data_offset": 2048, 00:34:28.432 "data_size": 63488 00:34:28.432 }, 00:34:28.432 { 00:34:28.432 "name": "BaseBdev3", 00:34:28.432 "uuid": "0c7c1199-b717-48bf-b587-40a04bb8f493", 00:34:28.432 "is_configured": true, 00:34:28.432 "data_offset": 2048, 00:34:28.432 "data_size": 63488 00:34:28.432 } 00:34:28.432 ] 00:34:28.432 }' 00:34:28.432 09:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.432 09:01:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.998 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:28.998 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:28.998 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
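(The verify_raid_bdev_state calls in this stretch, for example verify_raid_bdev_state Existed_Raid online raid5f 64 2 issued right after bdev_malloc_delete BaseBdev1, reduce to pulling the array's entry out of bdev_raid_get_bdevs and comparing a few fields against the expected values; because raid5f carries redundancy, the array stays online with only two of its three members present. A rough stand-alone equivalent of that check, assuming the same socket and jq filter shown in the trace; the function name check_raid_state is only for illustration, and the real helper in bdev_raid.sh tracks more fields than this:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Rough equivalent of verify_raid_bdev_state <name> <state> <level> <strip_size_kb> <n_operational>.
  check_raid_state() {
      local name=$1 want_state=$2 want_level=$3 want_strip=$4 want_operational=$5
      local info
      info=$($rpc bdev_raid_get_bdevs all | jq ".[] | select(.name == \"$name\")")

      [[ $(jq -r '.state' <<< "$info") == "$want_state" ]] || return 1
      [[ $(jq -r '.raid_level' <<< "$info") == "$want_level" ]] || return 1
      [[ $(jq -r '.strip_size_kb' <<< "$info") -eq "$want_strip" ]] || return 1
      [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq "$want_operational" ]] || return 1
  }

  # After bdev_malloc_delete BaseBdev1 above: degraded but still serving I/O.
  check_raid_state Existed_Raid online raid5f 64 2 && echo "Existed_Raid is online and degraded"

The base_bdevs_list slot for the removed member reports "name": null in the dump above, which is what the per-slot jq checks in the trace key on to detect the missing device.)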
00:34:28.998 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:29.255 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:29.255 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:29.255 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:29.512 [2024-07-12 09:01:04.517324] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:29.512 [2024-07-12 09:01:04.517523] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:29.512 [2024-07-12 09:01:04.597059] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:29.512 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:29.512 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:29.512 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.512 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:29.769 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:29.769 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:29.769 09:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:30.027 [2024-07-12 09:01:05.113256] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:30.027 [2024-07-12 09:01:05.113356] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:34:30.027 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:30.027 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:30.027 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.027 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:30.284 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:30.542 BaseBdev2 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:30.542 09:01:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:30.542 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:30.800 09:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:31.057 [ 00:34:31.057 { 00:34:31.057 "name": "BaseBdev2", 00:34:31.057 "aliases": [ 00:34:31.057 "eed096d5-78f2-4133-8774-777fdbf30d28" 00:34:31.057 ], 00:34:31.057 "product_name": "Malloc disk", 00:34:31.057 "block_size": 512, 00:34:31.057 "num_blocks": 65536, 00:34:31.057 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:31.057 "assigned_rate_limits": { 00:34:31.057 "rw_ios_per_sec": 0, 00:34:31.057 "rw_mbytes_per_sec": 0, 00:34:31.057 "r_mbytes_per_sec": 0, 00:34:31.057 "w_mbytes_per_sec": 0 00:34:31.057 }, 00:34:31.057 "claimed": false, 00:34:31.057 "zoned": false, 00:34:31.057 "supported_io_types": { 00:34:31.057 "read": true, 00:34:31.057 "write": true, 00:34:31.057 "unmap": true, 00:34:31.057 "flush": true, 00:34:31.057 "reset": true, 00:34:31.057 "nvme_admin": false, 00:34:31.057 "nvme_io": false, 00:34:31.057 "nvme_io_md": false, 00:34:31.057 "write_zeroes": true, 00:34:31.057 "zcopy": true, 00:34:31.057 "get_zone_info": false, 00:34:31.057 "zone_management": false, 00:34:31.057 "zone_append": false, 00:34:31.057 "compare": false, 00:34:31.057 "compare_and_write": false, 00:34:31.057 "abort": true, 00:34:31.057 "seek_hole": false, 00:34:31.057 "seek_data": false, 00:34:31.057 "copy": true, 00:34:31.057 "nvme_iov_md": false 00:34:31.057 }, 00:34:31.057 "memory_domains": [ 00:34:31.057 { 00:34:31.057 "dma_device_id": "system", 00:34:31.057 "dma_device_type": 1 00:34:31.057 }, 00:34:31.057 { 00:34:31.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.057 "dma_device_type": 2 00:34:31.057 } 00:34:31.057 ], 00:34:31.057 "driver_specific": {} 00:34:31.057 } 00:34:31.057 ] 00:34:31.057 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:31.057 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:31.057 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:31.057 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:31.314 BaseBdev3 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:31.314 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:31.573 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:31.831 [ 00:34:31.831 { 00:34:31.831 "name": "BaseBdev3", 00:34:31.831 "aliases": [ 00:34:31.831 "e3f47495-6fb6-4cae-bf8e-0c01e534bc67" 00:34:31.832 ], 00:34:31.832 "product_name": "Malloc disk", 00:34:31.832 "block_size": 512, 00:34:31.832 "num_blocks": 65536, 00:34:31.832 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:31.832 "assigned_rate_limits": { 00:34:31.832 "rw_ios_per_sec": 0, 00:34:31.832 "rw_mbytes_per_sec": 0, 00:34:31.832 "r_mbytes_per_sec": 0, 00:34:31.832 "w_mbytes_per_sec": 0 00:34:31.832 }, 00:34:31.832 "claimed": false, 00:34:31.832 "zoned": false, 00:34:31.832 "supported_io_types": { 00:34:31.832 "read": true, 00:34:31.832 "write": true, 00:34:31.832 "unmap": true, 00:34:31.832 "flush": true, 00:34:31.832 "reset": true, 00:34:31.832 "nvme_admin": false, 00:34:31.832 "nvme_io": false, 00:34:31.832 "nvme_io_md": false, 00:34:31.832 "write_zeroes": true, 00:34:31.832 "zcopy": true, 00:34:31.832 "get_zone_info": false, 00:34:31.832 "zone_management": false, 00:34:31.832 "zone_append": false, 00:34:31.832 "compare": false, 00:34:31.832 "compare_and_write": false, 00:34:31.832 "abort": true, 00:34:31.832 "seek_hole": false, 00:34:31.832 "seek_data": false, 00:34:31.832 "copy": true, 00:34:31.832 "nvme_iov_md": false 00:34:31.832 }, 00:34:31.832 "memory_domains": [ 00:34:31.832 { 00:34:31.832 "dma_device_id": "system", 00:34:31.832 "dma_device_type": 1 00:34:31.832 }, 00:34:31.832 { 00:34:31.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.832 "dma_device_type": 2 00:34:31.832 } 00:34:31.832 ], 00:34:31.832 "driver_specific": {} 00:34:31.832 } 00:34:31.832 ] 00:34:31.832 09:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:31.832 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:31.832 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:31.832 09:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:34:32.090 [2024-07-12 09:01:07.094208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:32.090 [2024-07-12 09:01:07.094312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:32.090 [2024-07-12 09:01:07.094380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:32.090 [2024-07-12 09:01:07.096532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:32.090 
09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.090 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.348 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:32.348 "name": "Existed_Raid", 00:34:32.348 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:32.348 "strip_size_kb": 64, 00:34:32.348 "state": "configuring", 00:34:32.348 "raid_level": "raid5f", 00:34:32.348 "superblock": true, 00:34:32.348 "num_base_bdevs": 3, 00:34:32.348 "num_base_bdevs_discovered": 2, 00:34:32.348 "num_base_bdevs_operational": 3, 00:34:32.348 "base_bdevs_list": [ 00:34:32.348 { 00:34:32.348 "name": "BaseBdev1", 00:34:32.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.348 "is_configured": false, 00:34:32.348 "data_offset": 0, 00:34:32.348 "data_size": 0 00:34:32.348 }, 00:34:32.348 { 00:34:32.348 "name": "BaseBdev2", 00:34:32.348 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:32.348 "is_configured": true, 00:34:32.348 "data_offset": 2048, 00:34:32.348 "data_size": 63488 00:34:32.348 }, 00:34:32.348 { 00:34:32.348 "name": "BaseBdev3", 00:34:32.348 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:32.348 "is_configured": true, 00:34:32.348 "data_offset": 2048, 00:34:32.348 "data_size": 63488 00:34:32.348 } 00:34:32.348 ] 00:34:32.348 }' 00:34:32.348 09:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:32.348 09:01:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.914 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:33.173 [2024-07-12 09:01:08.258448] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:33.173 09:01:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.173 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.431 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:33.431 "name": "Existed_Raid", 00:34:33.431 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:33.431 "strip_size_kb": 64, 00:34:33.431 "state": "configuring", 00:34:33.431 "raid_level": "raid5f", 00:34:33.431 "superblock": true, 00:34:33.431 "num_base_bdevs": 3, 00:34:33.431 "num_base_bdevs_discovered": 1, 00:34:33.431 "num_base_bdevs_operational": 3, 00:34:33.431 "base_bdevs_list": [ 00:34:33.431 { 00:34:33.431 "name": "BaseBdev1", 00:34:33.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:33.431 "is_configured": false, 00:34:33.431 "data_offset": 0, 00:34:33.431 "data_size": 0 00:34:33.431 }, 00:34:33.431 { 00:34:33.431 "name": null, 00:34:33.431 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:33.431 "is_configured": false, 00:34:33.431 "data_offset": 2048, 00:34:33.431 "data_size": 63488 00:34:33.431 }, 00:34:33.431 { 00:34:33.431 "name": "BaseBdev3", 00:34:33.431 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:33.431 "is_configured": true, 00:34:33.431 "data_offset": 2048, 00:34:33.431 "data_size": 63488 00:34:33.431 } 00:34:33.431 ] 00:34:33.431 }' 00:34:33.431 09:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:33.431 09:01:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.997 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.998 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:34.256 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:34.256 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:34.519 [2024-07-12 09:01:09.549897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:34.519 BaseBdev1 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev1 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:34.519 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:34.776 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:35.034 [ 00:34:35.034 { 00:34:35.034 "name": "BaseBdev1", 00:34:35.034 "aliases": [ 00:34:35.034 "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0" 00:34:35.034 ], 00:34:35.034 "product_name": "Malloc disk", 00:34:35.034 "block_size": 512, 00:34:35.034 "num_blocks": 65536, 00:34:35.034 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:35.034 "assigned_rate_limits": { 00:34:35.034 "rw_ios_per_sec": 0, 00:34:35.034 "rw_mbytes_per_sec": 0, 00:34:35.034 "r_mbytes_per_sec": 0, 00:34:35.034 "w_mbytes_per_sec": 0 00:34:35.034 }, 00:34:35.034 "claimed": true, 00:34:35.034 "claim_type": "exclusive_write", 00:34:35.034 "zoned": false, 00:34:35.034 "supported_io_types": { 00:34:35.034 "read": true, 00:34:35.034 "write": true, 00:34:35.034 "unmap": true, 00:34:35.034 "flush": true, 00:34:35.034 "reset": true, 00:34:35.034 "nvme_admin": false, 00:34:35.034 "nvme_io": false, 00:34:35.034 "nvme_io_md": false, 00:34:35.034 "write_zeroes": true, 00:34:35.034 "zcopy": true, 00:34:35.034 "get_zone_info": false, 00:34:35.034 "zone_management": false, 00:34:35.034 "zone_append": false, 00:34:35.034 "compare": false, 00:34:35.034 "compare_and_write": false, 00:34:35.034 "abort": true, 00:34:35.034 "seek_hole": false, 00:34:35.034 "seek_data": false, 00:34:35.034 "copy": true, 00:34:35.034 "nvme_iov_md": false 00:34:35.034 }, 00:34:35.034 "memory_domains": [ 00:34:35.034 { 00:34:35.034 "dma_device_id": "system", 00:34:35.034 "dma_device_type": 1 00:34:35.034 }, 00:34:35.034 { 00:34:35.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.034 "dma_device_type": 2 00:34:35.034 } 00:34:35.034 ], 00:34:35.034 "driver_specific": {} 00:34:35.034 } 00:34:35.034 ] 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:35.034 09:01:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.034 09:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:35.034 09:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:35.034 "name": "Existed_Raid", 00:34:35.034 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:35.034 "strip_size_kb": 64, 00:34:35.034 "state": "configuring", 00:34:35.034 "raid_level": "raid5f", 00:34:35.034 "superblock": true, 00:34:35.034 "num_base_bdevs": 3, 00:34:35.034 "num_base_bdevs_discovered": 2, 00:34:35.034 "num_base_bdevs_operational": 3, 00:34:35.034 "base_bdevs_list": [ 00:34:35.034 { 00:34:35.034 "name": "BaseBdev1", 00:34:35.034 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:35.034 "is_configured": true, 00:34:35.034 "data_offset": 2048, 00:34:35.034 "data_size": 63488 00:34:35.034 }, 00:34:35.034 { 00:34:35.034 "name": null, 00:34:35.034 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:35.034 "is_configured": false, 00:34:35.034 "data_offset": 2048, 00:34:35.034 "data_size": 63488 00:34:35.034 }, 00:34:35.034 { 00:34:35.034 "name": "BaseBdev3", 00:34:35.034 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:35.034 "is_configured": true, 00:34:35.034 "data_offset": 2048, 00:34:35.034 "data_size": 63488 00:34:35.034 } 00:34:35.034 ] 00:34:35.034 }' 00:34:35.034 09:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:35.034 09:01:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.970 09:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.970 09:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:35.970 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:35.970 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:36.241 [2024-07-12 09:01:11.188535] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:36.241 "name": "Existed_Raid", 00:34:36.241 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:36.241 "strip_size_kb": 64, 00:34:36.241 "state": "configuring", 00:34:36.241 "raid_level": "raid5f", 00:34:36.241 "superblock": true, 00:34:36.241 "num_base_bdevs": 3, 00:34:36.241 "num_base_bdevs_discovered": 1, 00:34:36.241 "num_base_bdevs_operational": 3, 00:34:36.241 "base_bdevs_list": [ 00:34:36.241 { 00:34:36.241 "name": "BaseBdev1", 00:34:36.241 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:36.241 "is_configured": true, 00:34:36.241 "data_offset": 2048, 00:34:36.241 "data_size": 63488 00:34:36.241 }, 00:34:36.241 { 00:34:36.241 "name": null, 00:34:36.241 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:36.241 "is_configured": false, 00:34:36.241 "data_offset": 2048, 00:34:36.241 "data_size": 63488 00:34:36.241 }, 00:34:36.241 { 00:34:36.241 "name": null, 00:34:36.241 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:36.241 "is_configured": false, 00:34:36.241 "data_offset": 2048, 00:34:36.241 "data_size": 63488 00:34:36.241 } 00:34:36.241 ] 00:34:36.241 }' 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:36.241 09:01:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.191 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.191 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:37.191 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:37.191 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:37.450 [2024-07-12 09:01:12.572876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:37.450 09:01:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.450 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:37.709 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:37.709 "name": "Existed_Raid", 00:34:37.709 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:37.709 "strip_size_kb": 64, 00:34:37.709 "state": "configuring", 00:34:37.709 "raid_level": "raid5f", 00:34:37.709 "superblock": true, 00:34:37.709 "num_base_bdevs": 3, 00:34:37.709 "num_base_bdevs_discovered": 2, 00:34:37.709 "num_base_bdevs_operational": 3, 00:34:37.709 "base_bdevs_list": [ 00:34:37.709 { 00:34:37.709 "name": "BaseBdev1", 00:34:37.709 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:37.709 "is_configured": true, 00:34:37.709 "data_offset": 2048, 00:34:37.709 "data_size": 63488 00:34:37.709 }, 00:34:37.709 { 00:34:37.709 "name": null, 00:34:37.709 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:37.709 "is_configured": false, 00:34:37.709 "data_offset": 2048, 00:34:37.709 "data_size": 63488 00:34:37.709 }, 00:34:37.709 { 00:34:37.709 "name": "BaseBdev3", 00:34:37.709 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:37.709 "is_configured": true, 00:34:37.709 "data_offset": 2048, 00:34:37.709 "data_size": 63488 00:34:37.709 } 00:34:37.709 ] 00:34:37.709 }' 00:34:37.709 09:01:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:37.709 09:01:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:38.643 09:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.643 09:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:38.643 09:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:38.643 09:01:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:38.902 [2024-07-12 09:01:13.936375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.902 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:39.160 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:39.160 "name": "Existed_Raid", 00:34:39.160 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:39.160 "strip_size_kb": 64, 00:34:39.160 "state": "configuring", 00:34:39.160 "raid_level": "raid5f", 00:34:39.160 "superblock": true, 00:34:39.160 "num_base_bdevs": 3, 00:34:39.160 "num_base_bdevs_discovered": 1, 00:34:39.160 "num_base_bdevs_operational": 3, 00:34:39.160 "base_bdevs_list": [ 00:34:39.160 { 00:34:39.160 "name": null, 00:34:39.160 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:39.160 "is_configured": false, 00:34:39.160 "data_offset": 2048, 00:34:39.160 "data_size": 63488 00:34:39.160 }, 00:34:39.160 { 00:34:39.160 "name": null, 00:34:39.160 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:39.160 "is_configured": false, 00:34:39.160 "data_offset": 2048, 00:34:39.161 "data_size": 63488 00:34:39.161 }, 00:34:39.161 { 00:34:39.161 "name": "BaseBdev3", 00:34:39.161 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:39.161 "is_configured": true, 00:34:39.161 "data_offset": 2048, 00:34:39.161 "data_size": 63488 00:34:39.161 } 00:34:39.161 ] 00:34:39.161 }' 00:34:39.161 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:39.161 09:01:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:39.726 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.726 09:01:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:40.292 [2024-07-12 09:01:15.423492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:40.292 09:01:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.292 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:40.550 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.550 "name": "Existed_Raid", 00:34:40.550 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:40.550 "strip_size_kb": 64, 00:34:40.550 "state": "configuring", 00:34:40.550 "raid_level": "raid5f", 00:34:40.550 "superblock": true, 00:34:40.550 "num_base_bdevs": 3, 00:34:40.550 "num_base_bdevs_discovered": 2, 00:34:40.550 "num_base_bdevs_operational": 3, 00:34:40.550 "base_bdevs_list": [ 00:34:40.550 { 00:34:40.550 "name": null, 00:34:40.550 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:40.550 "is_configured": false, 00:34:40.550 "data_offset": 2048, 00:34:40.550 "data_size": 63488 00:34:40.550 }, 00:34:40.550 { 00:34:40.550 "name": "BaseBdev2", 00:34:40.550 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:40.550 "is_configured": true, 00:34:40.550 "data_offset": 2048, 00:34:40.550 "data_size": 63488 00:34:40.550 }, 00:34:40.550 { 00:34:40.550 "name": "BaseBdev3", 00:34:40.550 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:40.550 "is_configured": true, 00:34:40.550 "data_offset": 2048, 00:34:40.550 "data_size": 63488 00:34:40.550 } 00:34:40.550 ] 00:34:40.550 }' 00:34:40.550 09:01:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.550 09:01:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:41.496 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.496 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:41.496 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:41.496 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.496 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 
00:34:41.754 09:01:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 12952b4d-7ab9-48f8-90cb-8e710cbb2ec0 00:34:42.012 [2024-07-12 09:01:17.035817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:42.012 [2024-07-12 09:01:17.036080] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:34:42.012 [2024-07-12 09:01:17.036111] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:42.012 [2024-07-12 09:01:17.036229] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:42.012 NewBaseBdev 00:34:42.012 [2024-07-12 09:01:17.040907] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:34:42.012 [2024-07-12 09:01:17.040931] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:34:42.012 [2024-07-12 09:01:17.041092] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:42.012 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:42.270 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:42.529 [ 00:34:42.529 { 00:34:42.529 "name": "NewBaseBdev", 00:34:42.529 "aliases": [ 00:34:42.529 "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0" 00:34:42.529 ], 00:34:42.529 "product_name": "Malloc disk", 00:34:42.529 "block_size": 512, 00:34:42.529 "num_blocks": 65536, 00:34:42.529 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:42.529 "assigned_rate_limits": { 00:34:42.529 "rw_ios_per_sec": 0, 00:34:42.529 "rw_mbytes_per_sec": 0, 00:34:42.529 "r_mbytes_per_sec": 0, 00:34:42.529 "w_mbytes_per_sec": 0 00:34:42.529 }, 00:34:42.529 "claimed": true, 00:34:42.529 "claim_type": "exclusive_write", 00:34:42.529 "zoned": false, 00:34:42.529 "supported_io_types": { 00:34:42.529 "read": true, 00:34:42.529 "write": true, 00:34:42.529 "unmap": true, 00:34:42.529 "flush": true, 00:34:42.529 "reset": true, 00:34:42.529 "nvme_admin": false, 00:34:42.529 "nvme_io": false, 00:34:42.529 "nvme_io_md": false, 00:34:42.529 "write_zeroes": true, 00:34:42.529 "zcopy": true, 00:34:42.529 "get_zone_info": false, 00:34:42.529 "zone_management": false, 00:34:42.529 "zone_append": false, 00:34:42.529 "compare": false, 00:34:42.529 "compare_and_write": false, 00:34:42.529 "abort": true, 00:34:42.529 "seek_hole": false, 00:34:42.529 "seek_data": false, 00:34:42.529 "copy": true, 
00:34:42.529 "nvme_iov_md": false 00:34:42.529 }, 00:34:42.529 "memory_domains": [ 00:34:42.529 { 00:34:42.529 "dma_device_id": "system", 00:34:42.529 "dma_device_type": 1 00:34:42.529 }, 00:34:42.529 { 00:34:42.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:42.529 "dma_device_type": 2 00:34:42.529 } 00:34:42.529 ], 00:34:42.529 "driver_specific": {} 00:34:42.529 } 00:34:42.529 ] 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.529 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:42.787 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:42.787 "name": "Existed_Raid", 00:34:42.787 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:42.787 "strip_size_kb": 64, 00:34:42.787 "state": "online", 00:34:42.787 "raid_level": "raid5f", 00:34:42.787 "superblock": true, 00:34:42.787 "num_base_bdevs": 3, 00:34:42.787 "num_base_bdevs_discovered": 3, 00:34:42.787 "num_base_bdevs_operational": 3, 00:34:42.787 "base_bdevs_list": [ 00:34:42.787 { 00:34:42.787 "name": "NewBaseBdev", 00:34:42.787 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:42.787 "is_configured": true, 00:34:42.787 "data_offset": 2048, 00:34:42.787 "data_size": 63488 00:34:42.787 }, 00:34:42.787 { 00:34:42.787 "name": "BaseBdev2", 00:34:42.787 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:42.787 "is_configured": true, 00:34:42.787 "data_offset": 2048, 00:34:42.787 "data_size": 63488 00:34:42.787 }, 00:34:42.787 { 00:34:42.787 "name": "BaseBdev3", 00:34:42.787 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:42.787 "is_configured": true, 00:34:42.787 "data_offset": 2048, 00:34:42.787 "data_size": 63488 00:34:42.787 } 00:34:42.787 ] 00:34:42.787 }' 00:34:42.787 09:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:42.787 09:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.355 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:43.356 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:43.613 [2024-07-12 09:01:18.714528] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:43.613 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:43.613 "name": "Existed_Raid", 00:34:43.613 "aliases": [ 00:34:43.613 "db945815-948a-4a55-8746-570405d218dd" 00:34:43.613 ], 00:34:43.613 "product_name": "Raid Volume", 00:34:43.613 "block_size": 512, 00:34:43.613 "num_blocks": 126976, 00:34:43.613 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:43.613 "assigned_rate_limits": { 00:34:43.613 "rw_ios_per_sec": 0, 00:34:43.613 "rw_mbytes_per_sec": 0, 00:34:43.613 "r_mbytes_per_sec": 0, 00:34:43.613 "w_mbytes_per_sec": 0 00:34:43.613 }, 00:34:43.613 "claimed": false, 00:34:43.613 "zoned": false, 00:34:43.613 "supported_io_types": { 00:34:43.613 "read": true, 00:34:43.613 "write": true, 00:34:43.613 "unmap": false, 00:34:43.613 "flush": false, 00:34:43.613 "reset": true, 00:34:43.613 "nvme_admin": false, 00:34:43.613 "nvme_io": false, 00:34:43.613 "nvme_io_md": false, 00:34:43.613 "write_zeroes": true, 00:34:43.613 "zcopy": false, 00:34:43.613 "get_zone_info": false, 00:34:43.613 "zone_management": false, 00:34:43.613 "zone_append": false, 00:34:43.613 "compare": false, 00:34:43.613 "compare_and_write": false, 00:34:43.613 "abort": false, 00:34:43.613 "seek_hole": false, 00:34:43.613 "seek_data": false, 00:34:43.613 "copy": false, 00:34:43.613 "nvme_iov_md": false 00:34:43.613 }, 00:34:43.613 "driver_specific": { 00:34:43.613 "raid": { 00:34:43.613 "uuid": "db945815-948a-4a55-8746-570405d218dd", 00:34:43.613 "strip_size_kb": 64, 00:34:43.613 "state": "online", 00:34:43.613 "raid_level": "raid5f", 00:34:43.613 "superblock": true, 00:34:43.613 "num_base_bdevs": 3, 00:34:43.613 "num_base_bdevs_discovered": 3, 00:34:43.613 "num_base_bdevs_operational": 3, 00:34:43.613 "base_bdevs_list": [ 00:34:43.613 { 00:34:43.613 "name": "NewBaseBdev", 00:34:43.613 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:43.613 "is_configured": true, 00:34:43.613 "data_offset": 2048, 00:34:43.613 "data_size": 63488 00:34:43.613 }, 00:34:43.613 { 00:34:43.613 "name": "BaseBdev2", 00:34:43.613 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:43.613 "is_configured": true, 00:34:43.613 "data_offset": 2048, 00:34:43.613 "data_size": 63488 00:34:43.613 }, 00:34:43.613 { 00:34:43.613 "name": "BaseBdev3", 00:34:43.613 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:43.613 "is_configured": true, 00:34:43.613 "data_offset": 2048, 00:34:43.613 "data_size": 63488 00:34:43.613 } 00:34:43.613 ] 00:34:43.613 } 00:34:43.613 } 00:34:43.613 }' 
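A minimal sketch of reproducing the inspection above by hand, assuming the same test app is still listening on /var/tmp/spdk-raid.sock; socket path, bdev name, and jq filters are taken from the trace, nothing here is new API:
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump the assembled raid5f volume, as verify_raid_bdev_properties does via bdev_get_bdevs
    $RPC bdev_get_bdevs -b Existed_Raid | jq '.[]'
    # List only the configured base bdevs, mirroring the jq filter used in the next trace entry
    $RPC bdev_get_bdevs -b Existed_Raid | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
    # The raid-specific view used by verify_raid_bdev_state
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'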
00:34:43.613 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:43.613 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:43.613 BaseBdev2 00:34:43.614 BaseBdev3' 00:34:43.614 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:43.614 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:43.614 09:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:43.871 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:43.871 "name": "NewBaseBdev", 00:34:43.871 "aliases": [ 00:34:43.871 "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0" 00:34:43.871 ], 00:34:43.871 "product_name": "Malloc disk", 00:34:43.871 "block_size": 512, 00:34:43.871 "num_blocks": 65536, 00:34:43.871 "uuid": "12952b4d-7ab9-48f8-90cb-8e710cbb2ec0", 00:34:43.871 "assigned_rate_limits": { 00:34:43.871 "rw_ios_per_sec": 0, 00:34:43.871 "rw_mbytes_per_sec": 0, 00:34:43.871 "r_mbytes_per_sec": 0, 00:34:43.871 "w_mbytes_per_sec": 0 00:34:43.871 }, 00:34:43.871 "claimed": true, 00:34:43.871 "claim_type": "exclusive_write", 00:34:43.871 "zoned": false, 00:34:43.871 "supported_io_types": { 00:34:43.871 "read": true, 00:34:43.871 "write": true, 00:34:43.871 "unmap": true, 00:34:43.871 "flush": true, 00:34:43.871 "reset": true, 00:34:43.871 "nvme_admin": false, 00:34:43.871 "nvme_io": false, 00:34:43.871 "nvme_io_md": false, 00:34:43.871 "write_zeroes": true, 00:34:43.871 "zcopy": true, 00:34:43.871 "get_zone_info": false, 00:34:43.871 "zone_management": false, 00:34:43.871 "zone_append": false, 00:34:43.871 "compare": false, 00:34:43.871 "compare_and_write": false, 00:34:43.871 "abort": true, 00:34:43.871 "seek_hole": false, 00:34:43.871 "seek_data": false, 00:34:43.871 "copy": true, 00:34:43.871 "nvme_iov_md": false 00:34:43.871 }, 00:34:43.871 "memory_domains": [ 00:34:43.871 { 00:34:43.871 "dma_device_id": "system", 00:34:43.871 "dma_device_type": 1 00:34:43.871 }, 00:34:43.871 { 00:34:43.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.871 "dma_device_type": 2 00:34:43.871 } 00:34:43.871 ], 00:34:43.871 "driver_specific": {} 00:34:43.871 }' 00:34:43.871 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.128 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.128 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:44.128 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.128 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.128 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:44.129 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:44.386 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:44.644 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:44.644 "name": "BaseBdev2", 00:34:44.644 "aliases": [ 00:34:44.644 "eed096d5-78f2-4133-8774-777fdbf30d28" 00:34:44.644 ], 00:34:44.644 "product_name": "Malloc disk", 00:34:44.644 "block_size": 512, 00:34:44.644 "num_blocks": 65536, 00:34:44.644 "uuid": "eed096d5-78f2-4133-8774-777fdbf30d28", 00:34:44.644 "assigned_rate_limits": { 00:34:44.644 "rw_ios_per_sec": 0, 00:34:44.644 "rw_mbytes_per_sec": 0, 00:34:44.644 "r_mbytes_per_sec": 0, 00:34:44.644 "w_mbytes_per_sec": 0 00:34:44.644 }, 00:34:44.644 "claimed": true, 00:34:44.644 "claim_type": "exclusive_write", 00:34:44.644 "zoned": false, 00:34:44.644 "supported_io_types": { 00:34:44.644 "read": true, 00:34:44.644 "write": true, 00:34:44.644 "unmap": true, 00:34:44.644 "flush": true, 00:34:44.644 "reset": true, 00:34:44.644 "nvme_admin": false, 00:34:44.644 "nvme_io": false, 00:34:44.644 "nvme_io_md": false, 00:34:44.644 "write_zeroes": true, 00:34:44.644 "zcopy": true, 00:34:44.644 "get_zone_info": false, 00:34:44.644 "zone_management": false, 00:34:44.644 "zone_append": false, 00:34:44.644 "compare": false, 00:34:44.644 "compare_and_write": false, 00:34:44.644 "abort": true, 00:34:44.644 "seek_hole": false, 00:34:44.644 "seek_data": false, 00:34:44.644 "copy": true, 00:34:44.644 "nvme_iov_md": false 00:34:44.644 }, 00:34:44.644 "memory_domains": [ 00:34:44.644 { 00:34:44.644 "dma_device_id": "system", 00:34:44.644 "dma_device_type": 1 00:34:44.644 }, 00:34:44.644 { 00:34:44.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:44.644 "dma_device_type": 2 00:34:44.644 } 00:34:44.644 ], 00:34:44.644 "driver_specific": {} 00:34:44.644 }' 00:34:44.644 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.644 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.901 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:44.901 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.901 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.901 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:44.901 09:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.901 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.901 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:44.901 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:45.159 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:34:45.159 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:45.159 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:45.159 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:45.159 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:45.417 "name": "BaseBdev3", 00:34:45.417 "aliases": [ 00:34:45.417 "e3f47495-6fb6-4cae-bf8e-0c01e534bc67" 00:34:45.417 ], 00:34:45.417 "product_name": "Malloc disk", 00:34:45.417 "block_size": 512, 00:34:45.417 "num_blocks": 65536, 00:34:45.417 "uuid": "e3f47495-6fb6-4cae-bf8e-0c01e534bc67", 00:34:45.417 "assigned_rate_limits": { 00:34:45.417 "rw_ios_per_sec": 0, 00:34:45.417 "rw_mbytes_per_sec": 0, 00:34:45.417 "r_mbytes_per_sec": 0, 00:34:45.417 "w_mbytes_per_sec": 0 00:34:45.417 }, 00:34:45.417 "claimed": true, 00:34:45.417 "claim_type": "exclusive_write", 00:34:45.417 "zoned": false, 00:34:45.417 "supported_io_types": { 00:34:45.417 "read": true, 00:34:45.417 "write": true, 00:34:45.417 "unmap": true, 00:34:45.417 "flush": true, 00:34:45.417 "reset": true, 00:34:45.417 "nvme_admin": false, 00:34:45.417 "nvme_io": false, 00:34:45.417 "nvme_io_md": false, 00:34:45.417 "write_zeroes": true, 00:34:45.417 "zcopy": true, 00:34:45.417 "get_zone_info": false, 00:34:45.417 "zone_management": false, 00:34:45.417 "zone_append": false, 00:34:45.417 "compare": false, 00:34:45.417 "compare_and_write": false, 00:34:45.417 "abort": true, 00:34:45.417 "seek_hole": false, 00:34:45.417 "seek_data": false, 00:34:45.417 "copy": true, 00:34:45.417 "nvme_iov_md": false 00:34:45.417 }, 00:34:45.417 "memory_domains": [ 00:34:45.417 { 00:34:45.417 "dma_device_id": "system", 00:34:45.417 "dma_device_type": 1 00:34:45.417 }, 00:34:45.417 { 00:34:45.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.417 "dma_device_type": 2 00:34:45.417 } 00:34:45.417 ], 00:34:45.417 "driver_specific": {} 00:34:45.417 }' 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:45.417 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
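For reference, a hedged sketch of the teardown performed in the trace entries that follow, using only RPC calls visible elsewhere in this log (socket path as above):
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Delete the assembled raid5f volume; the raid bdev transitions online -> offline and is freed
    $RPC bdev_raid_delete Existed_Raid
    # Confirm Existed_Raid is no longer listed among raid bdevs
    $RPC bdev_raid_get_bdevs all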
00:34:45.674 09:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:45.932 [2024-07-12 09:01:21.039060] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:45.932 [2024-07-12 09:01:21.039102] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:45.932 [2024-07-12 09:01:21.039204] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:45.932 [2024-07-12 09:01:21.039555] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:45.932 [2024-07-12 09:01:21.039581] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 153586 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 153586 ']' 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 153586 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153586 00:34:45.932 killing process with pid 153586 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153586' 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 153586 00:34:45.932 09:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 153586 00:34:45.932 [2024-07-12 09:01:21.071340] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:46.201 [2024-07-12 09:01:21.274020] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:47.138 ************************************ 00:34:47.138 END TEST raid5f_state_function_test_sb 00:34:47.138 ************************************ 00:34:47.138 09:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:34:47.138 00:34:47.138 real 0m31.159s 00:34:47.138 user 0m58.514s 00:34:47.138 sys 0m3.517s 00:34:47.138 09:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.138 09:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.396 09:01:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:47.396 09:01:22 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:34:47.396 09:01:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:34:47.396 09:01:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.396 09:01:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:47.396 ************************************ 00:34:47.396 START TEST 
raid5f_superblock_test 00:34:47.396 ************************************ 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=154616 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 154616 /var/tmp/spdk-raid.sock 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 154616 ']' 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:47.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.396 09:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.396 [2024-07-12 09:01:22.427646] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:34:47.397 [2024-07-12 09:01:22.427816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154616 ] 00:34:47.397 [2024-07-12 09:01:22.588637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.654 [2024-07-12 09:01:22.820025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.911 [2024-07-12 09:01:22.984981] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:48.476 malloc1 00:34:48.476 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:48.735 [2024-07-12 09:01:23.770536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:48.735 [2024-07-12 09:01:23.770622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.735 [2024-07-12 09:01:23.770666] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:34:48.735 [2024-07-12 09:01:23.770691] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.735 [2024-07-12 09:01:23.772873] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.735 [2024-07-12 09:01:23.772917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:48.735 pt1 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:48.735 09:01:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.735 09:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:48.993 malloc2 00:34:48.993 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.251 [2024-07-12 09:01:24.208743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.251 [2024-07-12 09:01:24.208840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.251 [2024-07-12 09:01:24.208874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:34:49.251 [2024-07-12 09:01:24.208894] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.251 [2024-07-12 09:01:24.210666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.251 [2024-07-12 09:01:24.210723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.251 pt2 00:34:49.251 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:49.251 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:49.252 malloc3 00:34:49.252 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:49.509 [2024-07-12 09:01:24.693861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:49.509 [2024-07-12 09:01:24.693943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.509 [2024-07-12 09:01:24.693975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:34:49.509 [2024-07-12 09:01:24.694001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.509 [2024-07-12 09:01:24.695766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.509 [2024-07-12 09:01:24.695813] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:49.509 pt3 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:34:49.767 [2024-07-12 09:01:24.881922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:49.767 [2024-07-12 09:01:24.883722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.767 [2024-07-12 09:01:24.883794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:49.767 [2024-07-12 09:01:24.883999] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:34:49.767 [2024-07-12 09:01:24.884022] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:49.767 [2024-07-12 09:01:24.884148] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:34:49.767 [2024-07-12 09:01:24.888393] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:34:49.767 [2024-07-12 09:01:24.888417] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:34:49.767 [2024-07-12 09:01:24.888569] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.767 09:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.025 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:50.025 "name": "raid_bdev1", 00:34:50.025 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:50.025 "strip_size_kb": 64, 00:34:50.025 "state": "online", 00:34:50.025 "raid_level": "raid5f", 00:34:50.025 "superblock": true, 00:34:50.025 "num_base_bdevs": 3, 00:34:50.025 "num_base_bdevs_discovered": 3, 00:34:50.025 "num_base_bdevs_operational": 3, 00:34:50.025 
"base_bdevs_list": [ 00:34:50.025 { 00:34:50.025 "name": "pt1", 00:34:50.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.025 "is_configured": true, 00:34:50.025 "data_offset": 2048, 00:34:50.025 "data_size": 63488 00:34:50.025 }, 00:34:50.025 { 00:34:50.025 "name": "pt2", 00:34:50.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.025 "is_configured": true, 00:34:50.025 "data_offset": 2048, 00:34:50.025 "data_size": 63488 00:34:50.025 }, 00:34:50.025 { 00:34:50.025 "name": "pt3", 00:34:50.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:50.025 "is_configured": true, 00:34:50.025 "data_offset": 2048, 00:34:50.025 "data_size": 63488 00:34:50.025 } 00:34:50.025 ] 00:34:50.025 }' 00:34:50.025 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:50.025 09:01:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:50.960 09:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:50.960 [2024-07-12 09:01:26.045507] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.960 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:50.960 "name": "raid_bdev1", 00:34:50.960 "aliases": [ 00:34:50.960 "a22cf4a2-2b83-4c93-8186-6d97fe169fe8" 00:34:50.960 ], 00:34:50.960 "product_name": "Raid Volume", 00:34:50.960 "block_size": 512, 00:34:50.960 "num_blocks": 126976, 00:34:50.960 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:50.960 "assigned_rate_limits": { 00:34:50.960 "rw_ios_per_sec": 0, 00:34:50.960 "rw_mbytes_per_sec": 0, 00:34:50.960 "r_mbytes_per_sec": 0, 00:34:50.960 "w_mbytes_per_sec": 0 00:34:50.960 }, 00:34:50.960 "claimed": false, 00:34:50.960 "zoned": false, 00:34:50.960 "supported_io_types": { 00:34:50.960 "read": true, 00:34:50.960 "write": true, 00:34:50.960 "unmap": false, 00:34:50.960 "flush": false, 00:34:50.960 "reset": true, 00:34:50.960 "nvme_admin": false, 00:34:50.960 "nvme_io": false, 00:34:50.960 "nvme_io_md": false, 00:34:50.960 "write_zeroes": true, 00:34:50.960 "zcopy": false, 00:34:50.960 "get_zone_info": false, 00:34:50.960 "zone_management": false, 00:34:50.960 "zone_append": false, 00:34:50.960 "compare": false, 00:34:50.960 "compare_and_write": false, 00:34:50.960 "abort": false, 00:34:50.960 "seek_hole": false, 00:34:50.960 "seek_data": false, 00:34:50.960 "copy": false, 00:34:50.960 "nvme_iov_md": false 00:34:50.960 }, 00:34:50.960 "driver_specific": { 00:34:50.960 "raid": { 00:34:50.960 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:50.960 "strip_size_kb": 64, 00:34:50.960 "state": "online", 00:34:50.960 "raid_level": "raid5f", 
00:34:50.960 "superblock": true, 00:34:50.960 "num_base_bdevs": 3, 00:34:50.960 "num_base_bdevs_discovered": 3, 00:34:50.960 "num_base_bdevs_operational": 3, 00:34:50.960 "base_bdevs_list": [ 00:34:50.960 { 00:34:50.960 "name": "pt1", 00:34:50.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.960 "is_configured": true, 00:34:50.960 "data_offset": 2048, 00:34:50.960 "data_size": 63488 00:34:50.960 }, 00:34:50.960 { 00:34:50.960 "name": "pt2", 00:34:50.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.960 "is_configured": true, 00:34:50.960 "data_offset": 2048, 00:34:50.960 "data_size": 63488 00:34:50.960 }, 00:34:50.960 { 00:34:50.960 "name": "pt3", 00:34:50.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:50.960 "is_configured": true, 00:34:50.960 "data_offset": 2048, 00:34:50.960 "data_size": 63488 00:34:50.960 } 00:34:50.960 ] 00:34:50.960 } 00:34:50.960 } 00:34:50.960 }' 00:34:50.960 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.960 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:50.960 pt2 00:34:50.960 pt3' 00:34:50.960 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.960 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:50.961 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.219 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.219 "name": "pt1", 00:34:51.219 "aliases": [ 00:34:51.219 "00000000-0000-0000-0000-000000000001" 00:34:51.219 ], 00:34:51.219 "product_name": "passthru", 00:34:51.219 "block_size": 512, 00:34:51.219 "num_blocks": 65536, 00:34:51.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:51.219 "assigned_rate_limits": { 00:34:51.219 "rw_ios_per_sec": 0, 00:34:51.219 "rw_mbytes_per_sec": 0, 00:34:51.219 "r_mbytes_per_sec": 0, 00:34:51.219 "w_mbytes_per_sec": 0 00:34:51.219 }, 00:34:51.219 "claimed": true, 00:34:51.219 "claim_type": "exclusive_write", 00:34:51.219 "zoned": false, 00:34:51.219 "supported_io_types": { 00:34:51.219 "read": true, 00:34:51.219 "write": true, 00:34:51.219 "unmap": true, 00:34:51.219 "flush": true, 00:34:51.219 "reset": true, 00:34:51.219 "nvme_admin": false, 00:34:51.219 "nvme_io": false, 00:34:51.219 "nvme_io_md": false, 00:34:51.219 "write_zeroes": true, 00:34:51.219 "zcopy": true, 00:34:51.219 "get_zone_info": false, 00:34:51.219 "zone_management": false, 00:34:51.219 "zone_append": false, 00:34:51.219 "compare": false, 00:34:51.219 "compare_and_write": false, 00:34:51.219 "abort": true, 00:34:51.219 "seek_hole": false, 00:34:51.219 "seek_data": false, 00:34:51.219 "copy": true, 00:34:51.219 "nvme_iov_md": false 00:34:51.219 }, 00:34:51.219 "memory_domains": [ 00:34:51.219 { 00:34:51.219 "dma_device_id": "system", 00:34:51.219 "dma_device_type": 1 00:34:51.219 }, 00:34:51.219 { 00:34:51.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.219 "dma_device_type": 2 00:34:51.219 } 00:34:51.219 ], 00:34:51.219 "driver_specific": { 00:34:51.219 "passthru": { 00:34:51.219 "name": "pt1", 00:34:51.219 "base_bdev_name": "malloc1" 00:34:51.219 } 00:34:51.219 } 00:34:51.219 }' 00:34:51.219 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:34:51.219 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.476 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:51.735 09:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.994 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.994 "name": "pt2", 00:34:51.994 "aliases": [ 00:34:51.994 "00000000-0000-0000-0000-000000000002" 00:34:51.994 ], 00:34:51.994 "product_name": "passthru", 00:34:51.994 "block_size": 512, 00:34:51.994 "num_blocks": 65536, 00:34:51.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.994 "assigned_rate_limits": { 00:34:51.994 "rw_ios_per_sec": 0, 00:34:51.994 "rw_mbytes_per_sec": 0, 00:34:51.994 "r_mbytes_per_sec": 0, 00:34:51.994 "w_mbytes_per_sec": 0 00:34:51.994 }, 00:34:51.994 "claimed": true, 00:34:51.994 "claim_type": "exclusive_write", 00:34:51.994 "zoned": false, 00:34:51.994 "supported_io_types": { 00:34:51.994 "read": true, 00:34:51.994 "write": true, 00:34:51.994 "unmap": true, 00:34:51.994 "flush": true, 00:34:51.994 "reset": true, 00:34:51.994 "nvme_admin": false, 00:34:51.994 "nvme_io": false, 00:34:51.994 "nvme_io_md": false, 00:34:51.994 "write_zeroes": true, 00:34:51.994 "zcopy": true, 00:34:51.994 "get_zone_info": false, 00:34:51.994 "zone_management": false, 00:34:51.994 "zone_append": false, 00:34:51.994 "compare": false, 00:34:51.994 "compare_and_write": false, 00:34:51.994 "abort": true, 00:34:51.994 "seek_hole": false, 00:34:51.994 "seek_data": false, 00:34:51.994 "copy": true, 00:34:51.994 "nvme_iov_md": false 00:34:51.994 }, 00:34:51.994 "memory_domains": [ 00:34:51.994 { 00:34:51.994 "dma_device_id": "system", 00:34:51.994 "dma_device_type": 1 00:34:51.994 }, 00:34:51.994 { 00:34:51.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.994 "dma_device_type": 2 00:34:51.994 } 00:34:51.994 ], 00:34:51.994 "driver_specific": { 00:34:51.994 "passthru": { 00:34:51.994 "name": "pt2", 00:34:51.994 "base_bdev_name": "malloc2" 00:34:51.994 } 00:34:51.994 } 00:34:51.994 }' 00:34:51.994 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.994 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.994 09:01:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.994 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.253 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.511 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:52.511 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:52.511 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:52.511 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:52.768 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:52.768 "name": "pt3", 00:34:52.768 "aliases": [ 00:34:52.768 "00000000-0000-0000-0000-000000000003" 00:34:52.768 ], 00:34:52.768 "product_name": "passthru", 00:34:52.768 "block_size": 512, 00:34:52.768 "num_blocks": 65536, 00:34:52.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:52.768 "assigned_rate_limits": { 00:34:52.768 "rw_ios_per_sec": 0, 00:34:52.768 "rw_mbytes_per_sec": 0, 00:34:52.768 "r_mbytes_per_sec": 0, 00:34:52.768 "w_mbytes_per_sec": 0 00:34:52.768 }, 00:34:52.768 "claimed": true, 00:34:52.768 "claim_type": "exclusive_write", 00:34:52.768 "zoned": false, 00:34:52.768 "supported_io_types": { 00:34:52.768 "read": true, 00:34:52.768 "write": true, 00:34:52.768 "unmap": true, 00:34:52.768 "flush": true, 00:34:52.768 "reset": true, 00:34:52.768 "nvme_admin": false, 00:34:52.768 "nvme_io": false, 00:34:52.768 "nvme_io_md": false, 00:34:52.768 "write_zeroes": true, 00:34:52.768 "zcopy": true, 00:34:52.768 "get_zone_info": false, 00:34:52.768 "zone_management": false, 00:34:52.768 "zone_append": false, 00:34:52.768 "compare": false, 00:34:52.768 "compare_and_write": false, 00:34:52.768 "abort": true, 00:34:52.768 "seek_hole": false, 00:34:52.768 "seek_data": false, 00:34:52.768 "copy": true, 00:34:52.768 "nvme_iov_md": false 00:34:52.768 }, 00:34:52.768 "memory_domains": [ 00:34:52.768 { 00:34:52.768 "dma_device_id": "system", 00:34:52.768 "dma_device_type": 1 00:34:52.768 }, 00:34:52.768 { 00:34:52.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.768 "dma_device_type": 2 00:34:52.768 } 00:34:52.768 ], 00:34:52.768 "driver_specific": { 00:34:52.768 "passthru": { 00:34:52.768 "name": "pt3", 00:34:52.768 "base_bdev_name": "malloc3" 00:34:52.768 } 00:34:52.768 } 00:34:52.768 }' 00:34:52.768 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.768 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.768 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:52.768 09:01:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.768 09:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.026 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.284 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:53.284 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:53.284 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:34:53.284 [2024-07-12 09:01:28.469975] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:53.543 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a22cf4a2-2b83-4c93-8186-6d97fe169fe8 00:34:53.543 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a22cf4a2-2b83-4c93-8186-6d97fe169fe8 ']' 00:34:53.543 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:53.543 [2024-07-12 09:01:28.737865] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:53.543 [2024-07-12 09:01:28.737890] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:53.543 [2024-07-12 09:01:28.737956] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:53.543 [2024-07-12 09:01:28.738033] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:53.543 [2024-07-12 09:01:28.738045] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:53.802 09:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:54.074 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:54.074 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:54.337 09:01:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:54.337 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:54.595 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:54.595 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:54.853 09:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:34:55.111 [2024-07-12 09:01:30.118069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:55.111 [2024-07-12 09:01:30.119950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:55.111 [2024-07-12 09:01:30.120018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:55.111 [2024-07-12 09:01:30.120072] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:55.111 [2024-07-12 09:01:30.120154] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:55.111 [2024-07-12 09:01:30.120223] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:55.111 [2024-07-12 09:01:30.120270] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:34:55.111 [2024-07-12 09:01:30.120281] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:34:55.111 request: 00:34:55.111 { 00:34:55.111 "name": "raid_bdev1", 00:34:55.111 "raid_level": "raid5f", 00:34:55.111 "base_bdevs": [ 00:34:55.111 "malloc1", 00:34:55.111 "malloc2", 00:34:55.111 "malloc3" 00:34:55.111 ], 00:34:55.111 "strip_size_kb": 64, 00:34:55.111 "superblock": false, 00:34:55.111 "method": "bdev_raid_create", 00:34:55.111 "req_id": 1 00:34:55.111 } 00:34:55.111 Got JSON-RPC error response 00:34:55.111 response: 00:34:55.111 { 00:34:55.111 "code": -17, 00:34:55.111 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:55.111 } 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.111 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:34:55.369 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:34:55.369 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:34:55.369 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:55.627 [2024-07-12 09:01:30.574088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:55.627 [2024-07-12 09:01:30.574145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:55.627 [2024-07-12 09:01:30.574178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:34:55.627 [2024-07-12 09:01:30.574197] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:55.627 [2024-07-12 09:01:30.576399] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:55.627 [2024-07-12 09:01:30.576465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:55.627 [2024-07-12 09:01:30.576562] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:55.627 [2024-07-12 09:01:30.576621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:55.627 pt1 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:55.627 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.628 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.628 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:55.628 "name": "raid_bdev1", 00:34:55.628 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:55.628 "strip_size_kb": 64, 00:34:55.628 "state": "configuring", 00:34:55.628 "raid_level": "raid5f", 00:34:55.628 "superblock": true, 00:34:55.628 "num_base_bdevs": 3, 00:34:55.628 "num_base_bdevs_discovered": 1, 00:34:55.628 "num_base_bdevs_operational": 3, 00:34:55.628 "base_bdevs_list": [ 00:34:55.628 { 00:34:55.628 "name": "pt1", 00:34:55.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:55.628 "is_configured": true, 00:34:55.628 "data_offset": 2048, 00:34:55.628 "data_size": 63488 00:34:55.628 }, 00:34:55.628 { 00:34:55.628 "name": null, 00:34:55.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:55.628 "is_configured": false, 00:34:55.628 "data_offset": 2048, 00:34:55.628 "data_size": 63488 00:34:55.628 }, 00:34:55.628 { 00:34:55.628 "name": null, 00:34:55.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:55.628 "is_configured": false, 00:34:55.628 "data_offset": 2048, 00:34:55.628 "data_size": 63488 00:34:55.628 } 00:34:55.628 ] 00:34:55.628 }' 00:34:55.628 09:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:55.628 09:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.560 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:34:56.560 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:56.560 [2024-07-12 09:01:31.690297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:56.560 [2024-07-12 09:01:31.690353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:56.560 [2024-07-12 09:01:31.690387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:56.560 [2024-07-12 09:01:31.690406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:56.560 [2024-07-12 09:01:31.690846] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:56.560 [2024-07-12 09:01:31.690889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:56.560 [2024-07-12 09:01:31.690971] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:56.560 [2024-07-12 09:01:31.691001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:56.560 pt2 00:34:56.560 09:01:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:56.819 [2024-07-12 09:01:31.924460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.819 09:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.077 09:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:57.077 "name": "raid_bdev1", 00:34:57.077 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:57.077 "strip_size_kb": 64, 00:34:57.077 "state": "configuring", 00:34:57.077 "raid_level": "raid5f", 00:34:57.077 "superblock": true, 00:34:57.077 "num_base_bdevs": 3, 00:34:57.077 "num_base_bdevs_discovered": 1, 00:34:57.077 "num_base_bdevs_operational": 3, 00:34:57.077 "base_bdevs_list": [ 00:34:57.077 { 00:34:57.077 "name": "pt1", 00:34:57.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:57.077 "is_configured": true, 00:34:57.077 "data_offset": 2048, 00:34:57.077 "data_size": 63488 00:34:57.077 }, 00:34:57.077 { 00:34:57.077 "name": null, 00:34:57.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:57.077 "is_configured": false, 00:34:57.077 "data_offset": 2048, 00:34:57.077 "data_size": 63488 00:34:57.077 }, 00:34:57.078 { 00:34:57.078 "name": null, 00:34:57.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:57.078 "is_configured": false, 00:34:57.078 "data_offset": 2048, 00:34:57.078 "data_size": 63488 00:34:57.078 } 00:34:57.078 ] 00:34:57.078 }' 00:34:57.078 09:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:57.078 09:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.644 09:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:34:57.644 09:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:57.644 09:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:57.902 [2024-07-12 09:01:33.084647] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:57.902 [2024-07-12 09:01:33.084729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.902 [2024-07-12 09:01:33.084759] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:57.902 [2024-07-12 09:01:33.084783] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.902 [2024-07-12 09:01:33.085209] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.902 [2024-07-12 09:01:33.085255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:57.902 [2024-07-12 09:01:33.085342] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:57.902 [2024-07-12 09:01:33.085367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:57.902 pt2 00:34:57.902 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:57.902 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:57.902 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:58.160 [2024-07-12 09:01:33.284726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:58.160 [2024-07-12 09:01:33.284782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.160 [2024-07-12 09:01:33.284806] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:58.160 [2024-07-12 09:01:33.284826] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.160 [2024-07-12 09:01:33.285242] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.160 [2024-07-12 09:01:33.285288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:58.160 [2024-07-12 09:01:33.285405] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:58.160 [2024-07-12 09:01:33.285431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:58.160 [2024-07-12 09:01:33.285557] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:34:58.160 [2024-07-12 09:01:33.285582] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:58.160 [2024-07-12 09:01:33.285668] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:58.160 [2024-07-12 09:01:33.289767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:34:58.160 [2024-07-12 09:01:33.289792] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:34:58.160 pt3 00:34:58.160 [2024-07-12 09:01:33.289966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.160 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.418 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.418 "name": "raid_bdev1", 00:34:58.418 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:58.418 "strip_size_kb": 64, 00:34:58.418 "state": "online", 00:34:58.418 "raid_level": "raid5f", 00:34:58.418 "superblock": true, 00:34:58.418 "num_base_bdevs": 3, 00:34:58.418 "num_base_bdevs_discovered": 3, 00:34:58.418 "num_base_bdevs_operational": 3, 00:34:58.418 "base_bdevs_list": [ 00:34:58.418 { 00:34:58.418 "name": "pt1", 00:34:58.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:58.418 "is_configured": true, 00:34:58.418 "data_offset": 2048, 00:34:58.418 "data_size": 63488 00:34:58.418 }, 00:34:58.418 { 00:34:58.418 "name": "pt2", 00:34:58.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:58.418 "is_configured": true, 00:34:58.418 "data_offset": 2048, 00:34:58.418 "data_size": 63488 00:34:58.418 }, 00:34:58.418 { 00:34:58.418 "name": "pt3", 00:34:58.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:58.418 "is_configured": true, 00:34:58.418 "data_offset": 2048, 00:34:58.418 "data_size": 63488 00:34:58.418 } 00:34:58.418 ] 00:34:58.418 }' 00:34:58.418 09:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.418 09:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:58.985 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:34:59.242 [2024-07-12 09:01:34.346942] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:59.242 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:59.242 "name": "raid_bdev1", 00:34:59.242 "aliases": [ 00:34:59.242 "a22cf4a2-2b83-4c93-8186-6d97fe169fe8" 00:34:59.243 ], 00:34:59.243 "product_name": "Raid Volume", 00:34:59.243 "block_size": 512, 00:34:59.243 "num_blocks": 126976, 00:34:59.243 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:59.243 "assigned_rate_limits": { 00:34:59.243 "rw_ios_per_sec": 0, 00:34:59.243 "rw_mbytes_per_sec": 0, 00:34:59.243 "r_mbytes_per_sec": 0, 00:34:59.243 "w_mbytes_per_sec": 0 00:34:59.243 }, 00:34:59.243 "claimed": false, 00:34:59.243 "zoned": false, 00:34:59.243 "supported_io_types": { 00:34:59.243 "read": true, 00:34:59.243 "write": true, 00:34:59.243 "unmap": false, 00:34:59.243 "flush": false, 00:34:59.243 "reset": true, 00:34:59.243 "nvme_admin": false, 00:34:59.243 "nvme_io": false, 00:34:59.243 "nvme_io_md": false, 00:34:59.243 "write_zeroes": true, 00:34:59.243 "zcopy": false, 00:34:59.243 "get_zone_info": false, 00:34:59.243 "zone_management": false, 00:34:59.243 "zone_append": false, 00:34:59.243 "compare": false, 00:34:59.243 "compare_and_write": false, 00:34:59.243 "abort": false, 00:34:59.243 "seek_hole": false, 00:34:59.243 "seek_data": false, 00:34:59.243 "copy": false, 00:34:59.243 "nvme_iov_md": false 00:34:59.243 }, 00:34:59.243 "driver_specific": { 00:34:59.243 "raid": { 00:34:59.243 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:34:59.243 "strip_size_kb": 64, 00:34:59.243 "state": "online", 00:34:59.243 "raid_level": "raid5f", 00:34:59.243 "superblock": true, 00:34:59.243 "num_base_bdevs": 3, 00:34:59.243 "num_base_bdevs_discovered": 3, 00:34:59.243 "num_base_bdevs_operational": 3, 00:34:59.243 "base_bdevs_list": [ 00:34:59.243 { 00:34:59.243 "name": "pt1", 00:34:59.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:59.243 "is_configured": true, 00:34:59.243 "data_offset": 2048, 00:34:59.243 "data_size": 63488 00:34:59.243 }, 00:34:59.243 { 00:34:59.243 "name": "pt2", 00:34:59.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:59.243 "is_configured": true, 00:34:59.243 "data_offset": 2048, 00:34:59.243 "data_size": 63488 00:34:59.243 }, 00:34:59.243 { 00:34:59.243 "name": "pt3", 00:34:59.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:59.243 "is_configured": true, 00:34:59.243 "data_offset": 2048, 00:34:59.243 "data_size": 63488 00:34:59.243 } 00:34:59.243 ] 00:34:59.243 } 00:34:59.243 } 00:34:59.243 }' 00:34:59.243 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:59.243 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:59.243 pt2 00:34:59.243 pt3' 00:34:59.243 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:59.243 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:59.243 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:59.501 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:59.501 "name": "pt1", 00:34:59.501 "aliases": [ 00:34:59.501 "00000000-0000-0000-0000-000000000001" 00:34:59.501 ], 
00:34:59.501 "product_name": "passthru", 00:34:59.501 "block_size": 512, 00:34:59.501 "num_blocks": 65536, 00:34:59.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:59.501 "assigned_rate_limits": { 00:34:59.501 "rw_ios_per_sec": 0, 00:34:59.501 "rw_mbytes_per_sec": 0, 00:34:59.501 "r_mbytes_per_sec": 0, 00:34:59.501 "w_mbytes_per_sec": 0 00:34:59.501 }, 00:34:59.501 "claimed": true, 00:34:59.501 "claim_type": "exclusive_write", 00:34:59.501 "zoned": false, 00:34:59.501 "supported_io_types": { 00:34:59.501 "read": true, 00:34:59.501 "write": true, 00:34:59.501 "unmap": true, 00:34:59.501 "flush": true, 00:34:59.501 "reset": true, 00:34:59.501 "nvme_admin": false, 00:34:59.501 "nvme_io": false, 00:34:59.501 "nvme_io_md": false, 00:34:59.501 "write_zeroes": true, 00:34:59.501 "zcopy": true, 00:34:59.501 "get_zone_info": false, 00:34:59.501 "zone_management": false, 00:34:59.501 "zone_append": false, 00:34:59.501 "compare": false, 00:34:59.501 "compare_and_write": false, 00:34:59.501 "abort": true, 00:34:59.501 "seek_hole": false, 00:34:59.501 "seek_data": false, 00:34:59.501 "copy": true, 00:34:59.501 "nvme_iov_md": false 00:34:59.501 }, 00:34:59.501 "memory_domains": [ 00:34:59.501 { 00:34:59.501 "dma_device_id": "system", 00:34:59.501 "dma_device_type": 1 00:34:59.501 }, 00:34:59.501 { 00:34:59.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.501 "dma_device_type": 2 00:34:59.501 } 00:34:59.501 ], 00:34:59.501 "driver_specific": { 00:34:59.501 "passthru": { 00:34:59.501 "name": "pt1", 00:34:59.502 "base_bdev_name": "malloc1" 00:34:59.502 } 00:34:59.502 } 00:34:59.502 }' 00:34:59.502 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:59.760 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:00.017 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:00.017 09:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.017 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.017 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:00.017 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:00.017 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:00.017 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:00.274 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:00.274 "name": "pt2", 00:35:00.274 "aliases": [ 00:35:00.274 "00000000-0000-0000-0000-000000000002" 00:35:00.274 ], 00:35:00.274 "product_name": "passthru", 00:35:00.274 "block_size": 512, 00:35:00.274 "num_blocks": 65536, 00:35:00.274 
"uuid": "00000000-0000-0000-0000-000000000002", 00:35:00.274 "assigned_rate_limits": { 00:35:00.274 "rw_ios_per_sec": 0, 00:35:00.274 "rw_mbytes_per_sec": 0, 00:35:00.274 "r_mbytes_per_sec": 0, 00:35:00.274 "w_mbytes_per_sec": 0 00:35:00.274 }, 00:35:00.274 "claimed": true, 00:35:00.274 "claim_type": "exclusive_write", 00:35:00.274 "zoned": false, 00:35:00.274 "supported_io_types": { 00:35:00.274 "read": true, 00:35:00.274 "write": true, 00:35:00.274 "unmap": true, 00:35:00.274 "flush": true, 00:35:00.274 "reset": true, 00:35:00.274 "nvme_admin": false, 00:35:00.274 "nvme_io": false, 00:35:00.274 "nvme_io_md": false, 00:35:00.274 "write_zeroes": true, 00:35:00.274 "zcopy": true, 00:35:00.274 "get_zone_info": false, 00:35:00.274 "zone_management": false, 00:35:00.274 "zone_append": false, 00:35:00.274 "compare": false, 00:35:00.274 "compare_and_write": false, 00:35:00.274 "abort": true, 00:35:00.274 "seek_hole": false, 00:35:00.274 "seek_data": false, 00:35:00.274 "copy": true, 00:35:00.274 "nvme_iov_md": false 00:35:00.274 }, 00:35:00.274 "memory_domains": [ 00:35:00.274 { 00:35:00.274 "dma_device_id": "system", 00:35:00.274 "dma_device_type": 1 00:35:00.274 }, 00:35:00.274 { 00:35:00.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.275 "dma_device_type": 2 00:35:00.275 } 00:35:00.275 ], 00:35:00.275 "driver_specific": { 00:35:00.275 "passthru": { 00:35:00.275 "name": "pt2", 00:35:00.275 "base_bdev_name": "malloc2" 00:35:00.275 } 00:35:00.275 } 00:35:00.275 }' 00:35:00.275 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:00.275 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:00.275 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:00.275 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:00.275 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:35:00.533 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:01.100 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:01.100 "name": "pt3", 00:35:01.100 "aliases": [ 00:35:01.100 "00000000-0000-0000-0000-000000000003" 00:35:01.100 ], 00:35:01.100 "product_name": "passthru", 00:35:01.100 "block_size": 512, 00:35:01.100 "num_blocks": 65536, 00:35:01.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:01.100 "assigned_rate_limits": { 00:35:01.100 "rw_ios_per_sec": 0, 
00:35:01.100 "rw_mbytes_per_sec": 0, 00:35:01.100 "r_mbytes_per_sec": 0, 00:35:01.100 "w_mbytes_per_sec": 0 00:35:01.100 }, 00:35:01.100 "claimed": true, 00:35:01.100 "claim_type": "exclusive_write", 00:35:01.100 "zoned": false, 00:35:01.100 "supported_io_types": { 00:35:01.100 "read": true, 00:35:01.100 "write": true, 00:35:01.100 "unmap": true, 00:35:01.100 "flush": true, 00:35:01.100 "reset": true, 00:35:01.100 "nvme_admin": false, 00:35:01.100 "nvme_io": false, 00:35:01.100 "nvme_io_md": false, 00:35:01.100 "write_zeroes": true, 00:35:01.100 "zcopy": true, 00:35:01.100 "get_zone_info": false, 00:35:01.100 "zone_management": false, 00:35:01.100 "zone_append": false, 00:35:01.100 "compare": false, 00:35:01.100 "compare_and_write": false, 00:35:01.100 "abort": true, 00:35:01.100 "seek_hole": false, 00:35:01.100 "seek_data": false, 00:35:01.100 "copy": true, 00:35:01.100 "nvme_iov_md": false 00:35:01.100 }, 00:35:01.100 "memory_domains": [ 00:35:01.100 { 00:35:01.100 "dma_device_id": "system", 00:35:01.100 "dma_device_type": 1 00:35:01.100 }, 00:35:01.100 { 00:35:01.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:01.100 "dma_device_type": 2 00:35:01.100 } 00:35:01.100 ], 00:35:01.100 "driver_specific": { 00:35:01.100 "passthru": { 00:35:01.100 "name": "pt3", 00:35:01.100 "base_bdev_name": "malloc3" 00:35:01.100 } 00:35:01.100 } 00:35:01.100 }' 00:35:01.100 09:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:01.100 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:01.361 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:01.620 [2024-07-12 09:01:36.705298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:01.620 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a22cf4a2-2b83-4c93-8186-6d97fe169fe8 '!=' a22cf4a2-2b83-4c93-8186-6d97fe169fe8 ']' 00:35:01.620 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:35:01.620 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:01.620 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:35:01.620 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:01.877 [2024-07-12 09:01:36.941104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:01.877 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:01.878 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:01.878 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.878 09:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:02.135 09:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:02.135 "name": "raid_bdev1", 00:35:02.135 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:35:02.135 "strip_size_kb": 64, 00:35:02.135 "state": "online", 00:35:02.135 "raid_level": "raid5f", 00:35:02.135 "superblock": true, 00:35:02.135 "num_base_bdevs": 3, 00:35:02.135 "num_base_bdevs_discovered": 2, 00:35:02.135 "num_base_bdevs_operational": 2, 00:35:02.135 "base_bdevs_list": [ 00:35:02.135 { 00:35:02.135 "name": null, 00:35:02.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.135 "is_configured": false, 00:35:02.135 "data_offset": 2048, 00:35:02.135 "data_size": 63488 00:35:02.135 }, 00:35:02.135 { 00:35:02.135 "name": "pt2", 00:35:02.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:02.135 "is_configured": true, 00:35:02.135 "data_offset": 2048, 00:35:02.135 "data_size": 63488 00:35:02.135 }, 00:35:02.135 { 00:35:02.135 "name": "pt3", 00:35:02.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:02.135 "is_configured": true, 00:35:02.135 "data_offset": 2048, 00:35:02.135 "data_size": 63488 00:35:02.135 } 00:35:02.135 ] 00:35:02.135 }' 00:35:02.135 09:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:02.135 09:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.705 09:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:02.963 [2024-07-12 09:01:38.129348] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:02.963 [2024-07-12 09:01:38.129390] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:02.963 [2024-07-12 09:01:38.129507] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:35:02.963 [2024-07-12 09:01:38.129616] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:02.963 [2024-07-12 09:01:38.129640] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:35:02.963 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.963 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:03.220 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:03.220 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:03.220 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:03.220 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:03.220 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:03.479 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:03.479 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:03.479 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:03.737 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:03.737 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:03.737 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:03.737 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:03.737 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:03.996 [2024-07-12 09:01:38.985462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:03.996 [2024-07-12 09:01:38.985576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.996 [2024-07-12 09:01:38.985620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:03.996 [2024-07-12 09:01:38.985643] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.996 [2024-07-12 09:01:38.988071] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.996 [2024-07-12 09:01:38.988118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:03.996 [2024-07-12 09:01:38.988255] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:03.996 [2024-07-12 09:01:38.988332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:03.996 pt2 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.996 09:01:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:04.254 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:04.254 "name": "raid_bdev1", 00:35:04.254 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:35:04.254 "strip_size_kb": 64, 00:35:04.254 "state": "configuring", 00:35:04.254 "raid_level": "raid5f", 00:35:04.254 "superblock": true, 00:35:04.254 "num_base_bdevs": 3, 00:35:04.254 "num_base_bdevs_discovered": 1, 00:35:04.254 "num_base_bdevs_operational": 2, 00:35:04.254 "base_bdevs_list": [ 00:35:04.254 { 00:35:04.254 "name": null, 00:35:04.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.254 "is_configured": false, 00:35:04.254 "data_offset": 2048, 00:35:04.254 "data_size": 63488 00:35:04.254 }, 00:35:04.254 { 00:35:04.254 "name": "pt2", 00:35:04.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:04.254 "is_configured": true, 00:35:04.254 "data_offset": 2048, 00:35:04.254 "data_size": 63488 00:35:04.254 }, 00:35:04.254 { 00:35:04.254 "name": null, 00:35:04.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:04.254 "is_configured": false, 00:35:04.254 "data_offset": 2048, 00:35:04.254 "data_size": 63488 00:35:04.254 } 00:35:04.254 ] 00:35:04.254 }' 00:35:04.254 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:04.254 09:01:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.820 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:35:04.820 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:04.820 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:35:04.820 09:01:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:05.078 [2024-07-12 09:01:40.021694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:05.078 [2024-07-12 09:01:40.021824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:05.078 [2024-07-12 09:01:40.021873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:05.078 [2024-07-12 09:01:40.021903] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:05.078 [2024-07-12 
09:01:40.022432] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:05.078 [2024-07-12 09:01:40.022470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:05.078 [2024-07-12 09:01:40.022586] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:05.078 [2024-07-12 09:01:40.022618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:05.078 [2024-07-12 09:01:40.022788] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:35:05.078 [2024-07-12 09:01:40.022810] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:05.078 [2024-07-12 09:01:40.022919] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:05.078 [2024-07-12 09:01:40.027166] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:35:05.078 [2024-07-12 09:01:40.027188] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:35:05.078 pt3 00:35:05.078 [2024-07-12 09:01:40.027491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.078 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.340 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:05.340 "name": "raid_bdev1", 00:35:05.340 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:35:05.340 "strip_size_kb": 64, 00:35:05.340 "state": "online", 00:35:05.340 "raid_level": "raid5f", 00:35:05.340 "superblock": true, 00:35:05.340 "num_base_bdevs": 3, 00:35:05.340 "num_base_bdevs_discovered": 2, 00:35:05.340 "num_base_bdevs_operational": 2, 00:35:05.340 "base_bdevs_list": [ 00:35:05.340 { 00:35:05.340 "name": null, 00:35:05.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.340 "is_configured": false, 00:35:05.340 "data_offset": 2048, 00:35:05.340 "data_size": 63488 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "name": "pt2", 00:35:05.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:05.340 "is_configured": true, 
00:35:05.340 "data_offset": 2048, 00:35:05.340 "data_size": 63488 00:35:05.340 }, 00:35:05.340 { 00:35:05.340 "name": "pt3", 00:35:05.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:05.340 "is_configured": true, 00:35:05.340 "data_offset": 2048, 00:35:05.340 "data_size": 63488 00:35:05.340 } 00:35:05.340 ] 00:35:05.340 }' 00:35:05.340 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:05.340 09:01:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.904 09:01:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:06.163 [2024-07-12 09:01:41.173323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:06.163 [2024-07-12 09:01:41.173364] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:06.163 [2024-07-12 09:01:41.173460] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:06.163 [2024-07-12 09:01:41.173568] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:06.163 [2024-07-12 09:01:41.173587] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:35:06.163 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.163 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:06.422 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:06.422 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:06.422 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:35:06.422 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:35:06.422 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:06.681 [2024-07-12 09:01:41.801369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:06.681 [2024-07-12 09:01:41.801446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.681 [2024-07-12 09:01:41.801492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:35:06.681 [2024-07-12 09:01:41.801515] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:06.681 [2024-07-12 09:01:41.803840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.681 [2024-07-12 09:01:41.803899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:06.681 [2024-07-12 09:01:41.803998] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:06.681 [2024-07-12 09:01:41.804050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:06.681 [2024-07-12 09:01:41.804245] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:06.681 [2024-07-12 09:01:41.804286] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:06.681 [2024-07-12 09:01:41.804318] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:35:06.681 [2024-07-12 09:01:41.804394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:06.681 pt1 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.681 09:01:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.945 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:06.945 "name": "raid_bdev1", 00:35:06.945 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:35:06.945 "strip_size_kb": 64, 00:35:06.945 "state": "configuring", 00:35:06.945 "raid_level": "raid5f", 00:35:06.945 "superblock": true, 00:35:06.945 "num_base_bdevs": 3, 00:35:06.945 "num_base_bdevs_discovered": 1, 00:35:06.945 "num_base_bdevs_operational": 2, 00:35:06.945 "base_bdevs_list": [ 00:35:06.945 { 00:35:06.945 "name": null, 00:35:06.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.945 "is_configured": false, 00:35:06.945 "data_offset": 2048, 00:35:06.945 "data_size": 63488 00:35:06.945 }, 00:35:06.945 { 00:35:06.945 "name": "pt2", 00:35:06.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:06.945 "is_configured": true, 00:35:06.945 "data_offset": 2048, 00:35:06.945 "data_size": 63488 00:35:06.945 }, 00:35:06.945 { 00:35:06.945 "name": null, 00:35:06.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:06.945 "is_configured": false, 00:35:06.945 "data_offset": 2048, 00:35:06.945 "data_size": 63488 00:35:06.945 } 00:35:06.945 ] 00:35:06.945 }' 00:35:06.945 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:06.945 09:01:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.538 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:35:07.538 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:07.795 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:35:07.795 09:01:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:08.053 [2024-07-12 09:01:43.125595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:08.053 [2024-07-12 09:01:43.125668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:08.053 [2024-07-12 09:01:43.125708] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:35:08.053 [2024-07-12 09:01:43.125737] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:08.053 [2024-07-12 09:01:43.126207] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:08.053 [2024-07-12 09:01:43.126256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:08.053 [2024-07-12 09:01:43.126345] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:08.053 [2024-07-12 09:01:43.126372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:08.053 [2024-07-12 09:01:43.126498] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:35:08.053 [2024-07-12 09:01:43.126521] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:08.053 [2024-07-12 09:01:43.126645] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:08.053 [2024-07-12 09:01:43.130649] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:35:08.053 [2024-07-12 09:01:43.130676] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:35:08.053 pt3 00:35:08.053 [2024-07-12 09:01:43.130891] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.053 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.311 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:08.311 "name": "raid_bdev1", 00:35:08.311 "uuid": "a22cf4a2-2b83-4c93-8186-6d97fe169fe8", 00:35:08.311 "strip_size_kb": 64, 00:35:08.311 "state": "online", 00:35:08.311 "raid_level": "raid5f", 00:35:08.311 "superblock": true, 00:35:08.311 "num_base_bdevs": 3, 00:35:08.311 "num_base_bdevs_discovered": 2, 00:35:08.311 "num_base_bdevs_operational": 2, 00:35:08.311 "base_bdevs_list": [ 00:35:08.311 { 00:35:08.311 "name": null, 00:35:08.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.311 "is_configured": false, 00:35:08.311 "data_offset": 2048, 00:35:08.311 "data_size": 63488 00:35:08.311 }, 00:35:08.311 { 00:35:08.311 "name": "pt2", 00:35:08.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:08.311 "is_configured": true, 00:35:08.311 "data_offset": 2048, 00:35:08.311 "data_size": 63488 00:35:08.311 }, 00:35:08.311 { 00:35:08.311 "name": "pt3", 00:35:08.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:08.311 "is_configured": true, 00:35:08.311 "data_offset": 2048, 00:35:08.311 "data_size": 63488 00:35:08.311 } 00:35:08.311 ] 00:35:08.311 }' 00:35:08.311 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:08.311 09:01:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:08.877 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:08.877 09:01:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:09.144 09:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:09.144 09:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:09.144 09:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:09.401 [2024-07-12 09:01:44.439709] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' a22cf4a2-2b83-4c93-8186-6d97fe169fe8 '!=' a22cf4a2-2b83-4c93-8186-6d97fe169fe8 ']' 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 154616 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 154616 ']' 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 154616 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154616 00:35:09.401 killing process with pid 154616 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:09.401 09:01:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154616' 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 154616 00:35:09.401 09:01:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 154616 00:35:09.401 [2024-07-12 09:01:44.475201] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:09.401 [2024-07-12 09:01:44.475258] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:09.401 [2024-07-12 09:01:44.475347] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:09.401 [2024-07-12 09:01:44.475375] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:35:09.659 [2024-07-12 09:01:44.665838] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:10.592 09:01:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:35:10.592 ************************************ 00:35:10.592 END TEST raid5f_superblock_test 00:35:10.592 ************************************ 00:35:10.592 00:35:10.592 real 0m23.206s 00:35:10.592 user 0m43.492s 00:35:10.592 sys 0m2.571s 00:35:10.592 09:01:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:10.592 09:01:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.592 09:01:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:10.592 09:01:45 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:35:10.592 09:01:45 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:35:10.592 09:01:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:10.592 09:01:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:10.592 09:01:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:10.592 ************************************ 00:35:10.592 START TEST raid5f_rebuild_test 00:35:10.592 ************************************ 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:10.592 09:01:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:10.592 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=155379 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 155379 /var/tmp/spdk-raid.sock 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 155379 ']' 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:10.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:10.593 09:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.593 [2024-07-12 09:01:45.715623] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:35:10.593 [2024-07-12 09:01:45.715829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155379 ] 00:35:10.593 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:35:10.593 Zero copy mechanism will not be used. 00:35:10.851 [2024-07-12 09:01:45.883718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.851 [2024-07-12 09:01:46.046215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.109 [2024-07-12 09:01:46.210685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:11.675 09:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:11.675 09:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:35:11.675 09:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:11.675 09:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:11.935 BaseBdev1_malloc 00:35:11.935 09:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:12.193 [2024-07-12 09:01:47.213371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:12.193 [2024-07-12 09:01:47.213605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:12.193 [2024-07-12 09:01:47.213785] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:35:12.193 [2024-07-12 09:01:47.213908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:12.193 [2024-07-12 09:01:47.215884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:12.193 [2024-07-12 09:01:47.216047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:12.193 BaseBdev1 00:35:12.193 09:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:12.193 09:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:12.450 BaseBdev2_malloc 00:35:12.450 09:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:12.708 [2024-07-12 09:01:47.741228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:12.708 [2024-07-12 09:01:47.741480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:12.708 [2024-07-12 09:01:47.741626] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:12.708 [2024-07-12 09:01:47.741735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:12.708 [2024-07-12 09:01:47.743786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:12.708 [2024-07-12 09:01:47.743957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:12.708 BaseBdev2 00:35:12.708 09:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:12.708 09:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:12.966 BaseBdev3_malloc 00:35:12.966 
09:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:13.224 [2024-07-12 09:01:48.226904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:13.224 [2024-07-12 09:01:48.227158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:13.224 [2024-07-12 09:01:48.227300] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:13.224 [2024-07-12 09:01:48.227415] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:13.224 [2024-07-12 09:01:48.229447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:13.224 [2024-07-12 09:01:48.229617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:13.224 BaseBdev3 00:35:13.224 09:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:13.481 spare_malloc 00:35:13.481 09:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:13.738 spare_delay 00:35:13.738 09:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:13.738 [2024-07-12 09:01:48.876132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:13.738 [2024-07-12 09:01:48.876375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:13.738 [2024-07-12 09:01:48.876514] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:35:13.738 [2024-07-12 09:01:48.876646] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:13.738 [2024-07-12 09:01:48.878647] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:13.738 [2024-07-12 09:01:48.878863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:13.738 spare 00:35:13.738 09:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:35:13.995 [2024-07-12 09:01:49.068216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:13.995 [2024-07-12 09:01:49.070194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:13.995 [2024-07-12 09:01:49.070379] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:13.995 [2024-07-12 09:01:49.070532] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:35:13.995 [2024-07-12 09:01:49.070585] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:35:13.995 [2024-07-12 09:01:49.070803] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:13.995 [2024-07-12 09:01:49.075384] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:35:13.995 [2024-07-12 09:01:49.075518] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:35:13.995 [2024-07-12 09:01:49.075785] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.995 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:14.252 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:14.252 "name": "raid_bdev1", 00:35:14.252 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:14.252 "strip_size_kb": 64, 00:35:14.252 "state": "online", 00:35:14.252 "raid_level": "raid5f", 00:35:14.252 "superblock": false, 00:35:14.252 "num_base_bdevs": 3, 00:35:14.252 "num_base_bdevs_discovered": 3, 00:35:14.252 "num_base_bdevs_operational": 3, 00:35:14.252 "base_bdevs_list": [ 00:35:14.252 { 00:35:14.252 "name": "BaseBdev1", 00:35:14.252 "uuid": "37364366-db0d-55ab-9b5e-4c2f16b479ec", 00:35:14.252 "is_configured": true, 00:35:14.252 "data_offset": 0, 00:35:14.252 "data_size": 65536 00:35:14.252 }, 00:35:14.252 { 00:35:14.252 "name": "BaseBdev2", 00:35:14.252 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:14.252 "is_configured": true, 00:35:14.252 "data_offset": 0, 00:35:14.252 "data_size": 65536 00:35:14.252 }, 00:35:14.252 { 00:35:14.252 "name": "BaseBdev3", 00:35:14.253 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:14.253 "is_configured": true, 00:35:14.253 "data_offset": 0, 00:35:14.253 "data_size": 65536 00:35:14.253 } 00:35:14.253 ] 00:35:14.253 }' 00:35:14.253 09:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:14.253 09:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.184 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:15.184 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:15.184 [2024-07-12 09:01:50.292858] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:15.184 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:35:15.184 09:01:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:15.184 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:15.441 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:15.699 [2024-07-12 09:01:50.728857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:15.699 /dev/nbd0 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:15.699 1+0 records in 00:35:15.699 1+0 records out 00:35:15.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234816 s, 17.4 MB/s 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # 
size=4096 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:35:15.699 09:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:35:16.263 512+0 records in 00:35:16.263 512+0 records out 00:35:16.263 67108864 bytes (67 MB, 64 MiB) copied, 0.399849 s, 168 MB/s 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:16.263 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:16.263 [2024-07-12 09:01:51.414688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:16.521 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:16.780 [2024-07-12 09:01:51.752691] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:16.780 "name": "raid_bdev1", 00:35:16.780 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:16.780 "strip_size_kb": 64, 00:35:16.780 "state": "online", 00:35:16.780 "raid_level": "raid5f", 00:35:16.780 "superblock": false, 00:35:16.780 "num_base_bdevs": 3, 00:35:16.780 "num_base_bdevs_discovered": 2, 00:35:16.780 "num_base_bdevs_operational": 2, 00:35:16.780 "base_bdevs_list": [ 00:35:16.780 { 00:35:16.780 "name": null, 00:35:16.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:16.780 "is_configured": false, 00:35:16.780 "data_offset": 0, 00:35:16.780 "data_size": 65536 00:35:16.780 }, 00:35:16.780 { 00:35:16.780 "name": "BaseBdev2", 00:35:16.780 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:16.780 "is_configured": true, 00:35:16.780 "data_offset": 0, 00:35:16.780 "data_size": 65536 00:35:16.780 }, 00:35:16.780 { 00:35:16.780 "name": "BaseBdev3", 00:35:16.780 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:16.780 "is_configured": true, 00:35:16.780 "data_offset": 0, 00:35:16.780 "data_size": 65536 00:35:16.780 } 00:35:16.780 ] 00:35:16.780 }' 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:16.780 09:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.712 09:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:17.969 [2024-07-12 09:01:52.937125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:17.969 [2024-07-12 09:01:52.947818] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:35:17.969 [2024-07-12 09:01:52.953256] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:17.969 09:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.901 09:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:19.159 "name": "raid_bdev1", 00:35:19.159 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:19.159 "strip_size_kb": 64, 00:35:19.159 "state": "online", 00:35:19.159 "raid_level": "raid5f", 00:35:19.159 "superblock": false, 00:35:19.159 "num_base_bdevs": 3, 00:35:19.159 "num_base_bdevs_discovered": 3, 00:35:19.159 "num_base_bdevs_operational": 3, 00:35:19.159 "process": { 00:35:19.159 "type": "rebuild", 00:35:19.159 "target": "spare", 00:35:19.159 "progress": { 00:35:19.159 "blocks": 24576, 00:35:19.159 "percent": 18 00:35:19.159 } 00:35:19.159 }, 00:35:19.159 "base_bdevs_list": [ 00:35:19.159 { 00:35:19.159 "name": "spare", 00:35:19.159 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:19.159 "is_configured": true, 00:35:19.159 "data_offset": 0, 00:35:19.159 "data_size": 65536 00:35:19.159 }, 00:35:19.159 { 00:35:19.159 "name": "BaseBdev2", 00:35:19.159 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:19.159 "is_configured": true, 00:35:19.159 "data_offset": 0, 00:35:19.159 "data_size": 65536 00:35:19.159 }, 00:35:19.159 { 00:35:19.159 "name": "BaseBdev3", 00:35:19.159 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:19.159 "is_configured": true, 00:35:19.159 "data_offset": 0, 00:35:19.159 "data_size": 65536 00:35:19.159 } 00:35:19.159 ] 00:35:19.159 }' 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:19.159 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:19.417 [2024-07-12 09:01:54.488908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:19.417 [2024-07-12 09:01:54.566594] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:19.417 [2024-07-12 09:01:54.566663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.417 [2024-07-12 09:01:54.566681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:19.417 [2024-07-12 09:01:54.566688] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 2 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.417 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.675 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:19.675 "name": "raid_bdev1", 00:35:19.675 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:19.675 "strip_size_kb": 64, 00:35:19.675 "state": "online", 00:35:19.675 "raid_level": "raid5f", 00:35:19.675 "superblock": false, 00:35:19.675 "num_base_bdevs": 3, 00:35:19.675 "num_base_bdevs_discovered": 2, 00:35:19.675 "num_base_bdevs_operational": 2, 00:35:19.675 "base_bdevs_list": [ 00:35:19.675 { 00:35:19.675 "name": null, 00:35:19.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:19.675 "is_configured": false, 00:35:19.675 "data_offset": 0, 00:35:19.675 "data_size": 65536 00:35:19.675 }, 00:35:19.675 { 00:35:19.675 "name": "BaseBdev2", 00:35:19.675 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:19.675 "is_configured": true, 00:35:19.675 "data_offset": 0, 00:35:19.675 "data_size": 65536 00:35:19.675 }, 00:35:19.675 { 00:35:19.675 "name": "BaseBdev3", 00:35:19.675 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:19.675 "is_configured": true, 00:35:19.675 "data_offset": 0, 00:35:19.675 "data_size": 65536 00:35:19.675 } 00:35:19.675 ] 00:35:19.675 }' 00:35:19.675 09:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:19.675 09:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.609 09:01:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:20.609 "name": "raid_bdev1", 00:35:20.609 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:20.609 "strip_size_kb": 64, 00:35:20.609 "state": "online", 00:35:20.609 "raid_level": "raid5f", 00:35:20.609 "superblock": false, 00:35:20.609 "num_base_bdevs": 3, 00:35:20.609 "num_base_bdevs_discovered": 2, 00:35:20.609 "num_base_bdevs_operational": 2, 00:35:20.609 "base_bdevs_list": [ 00:35:20.609 { 00:35:20.609 "name": null, 00:35:20.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.609 "is_configured": false, 00:35:20.609 "data_offset": 0, 00:35:20.609 "data_size": 65536 00:35:20.609 }, 00:35:20.609 { 00:35:20.609 "name": "BaseBdev2", 00:35:20.609 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:20.609 "is_configured": true, 00:35:20.609 "data_offset": 0, 00:35:20.609 "data_size": 65536 00:35:20.609 }, 00:35:20.609 { 00:35:20.609 "name": "BaseBdev3", 00:35:20.609 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:20.609 "is_configured": true, 00:35:20.609 "data_offset": 0, 00:35:20.609 "data_size": 65536 00:35:20.609 } 00:35:20.609 ] 00:35:20.609 }' 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:20.609 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:20.867 [2024-07-12 09:01:55.966421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:20.867 [2024-07-12 09:01:55.976887] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d080 00:35:20.867 [2024-07-12 09:01:55.982291] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:20.867 09:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.816 09:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.075 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.075 "name": "raid_bdev1", 00:35:22.075 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:22.075 "strip_size_kb": 64, 00:35:22.075 "state": "online", 00:35:22.075 "raid_level": "raid5f", 00:35:22.075 "superblock": false, 00:35:22.075 "num_base_bdevs": 3, 00:35:22.075 "num_base_bdevs_discovered": 3, 
00:35:22.075 "num_base_bdevs_operational": 3, 00:35:22.075 "process": { 00:35:22.075 "type": "rebuild", 00:35:22.075 "target": "spare", 00:35:22.075 "progress": { 00:35:22.075 "blocks": 24576, 00:35:22.075 "percent": 18 00:35:22.075 } 00:35:22.075 }, 00:35:22.075 "base_bdevs_list": [ 00:35:22.075 { 00:35:22.075 "name": "spare", 00:35:22.075 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:22.075 "is_configured": true, 00:35:22.075 "data_offset": 0, 00:35:22.075 "data_size": 65536 00:35:22.075 }, 00:35:22.075 { 00:35:22.075 "name": "BaseBdev2", 00:35:22.075 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:22.075 "is_configured": true, 00:35:22.075 "data_offset": 0, 00:35:22.075 "data_size": 65536 00:35:22.075 }, 00:35:22.075 { 00:35:22.075 "name": "BaseBdev3", 00:35:22.075 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:22.075 "is_configured": true, 00:35:22.075 "data_offset": 0, 00:35:22.075 "data_size": 65536 00:35:22.075 } 00:35:22.075 ] 00:35:22.075 }' 00:35:22.075 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1216 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.334 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.592 "name": "raid_bdev1", 00:35:22.592 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:22.592 "strip_size_kb": 64, 00:35:22.592 "state": "online", 00:35:22.592 "raid_level": "raid5f", 00:35:22.592 "superblock": false, 00:35:22.592 "num_base_bdevs": 3, 00:35:22.592 "num_base_bdevs_discovered": 3, 00:35:22.592 "num_base_bdevs_operational": 3, 00:35:22.592 "process": { 00:35:22.592 "type": "rebuild", 00:35:22.592 "target": "spare", 00:35:22.592 "progress": { 00:35:22.592 "blocks": 30720, 00:35:22.592 "percent": 23 00:35:22.592 } 00:35:22.592 }, 00:35:22.592 "base_bdevs_list": [ 00:35:22.592 { 
00:35:22.592 "name": "spare", 00:35:22.592 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:22.592 "is_configured": true, 00:35:22.592 "data_offset": 0, 00:35:22.592 "data_size": 65536 00:35:22.592 }, 00:35:22.592 { 00:35:22.592 "name": "BaseBdev2", 00:35:22.592 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:22.592 "is_configured": true, 00:35:22.592 "data_offset": 0, 00:35:22.592 "data_size": 65536 00:35:22.592 }, 00:35:22.592 { 00:35:22.592 "name": "BaseBdev3", 00:35:22.592 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:22.592 "is_configured": true, 00:35:22.592 "data_offset": 0, 00:35:22.592 "data_size": 65536 00:35:22.592 } 00:35:22.592 ] 00:35:22.592 }' 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:22.592 09:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:23.965 "name": "raid_bdev1", 00:35:23.965 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:23.965 "strip_size_kb": 64, 00:35:23.965 "state": "online", 00:35:23.965 "raid_level": "raid5f", 00:35:23.965 "superblock": false, 00:35:23.965 "num_base_bdevs": 3, 00:35:23.965 "num_base_bdevs_discovered": 3, 00:35:23.965 "num_base_bdevs_operational": 3, 00:35:23.965 "process": { 00:35:23.965 "type": "rebuild", 00:35:23.965 "target": "spare", 00:35:23.965 "progress": { 00:35:23.965 "blocks": 59392, 00:35:23.965 "percent": 45 00:35:23.965 } 00:35:23.965 }, 00:35:23.965 "base_bdevs_list": [ 00:35:23.965 { 00:35:23.965 "name": "spare", 00:35:23.965 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:23.965 "is_configured": true, 00:35:23.965 "data_offset": 0, 00:35:23.965 "data_size": 65536 00:35:23.965 }, 00:35:23.965 { 00:35:23.965 "name": "BaseBdev2", 00:35:23.965 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:23.965 "is_configured": true, 00:35:23.965 "data_offset": 0, 00:35:23.965 "data_size": 65536 00:35:23.965 }, 00:35:23.965 { 00:35:23.965 "name": "BaseBdev3", 00:35:23.965 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:23.965 "is_configured": true, 00:35:23.965 "data_offset": 0, 00:35:23.965 "data_size": 65536 
00:35:23.965 } 00:35:23.965 ] 00:35:23.965 }' 00:35:23.965 09:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:23.965 09:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:23.965 09:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:23.965 09:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:23.965 09:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.915 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.172 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:25.172 "name": "raid_bdev1", 00:35:25.172 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:25.172 "strip_size_kb": 64, 00:35:25.172 "state": "online", 00:35:25.172 "raid_level": "raid5f", 00:35:25.172 "superblock": false, 00:35:25.172 "num_base_bdevs": 3, 00:35:25.172 "num_base_bdevs_discovered": 3, 00:35:25.172 "num_base_bdevs_operational": 3, 00:35:25.172 "process": { 00:35:25.172 "type": "rebuild", 00:35:25.172 "target": "spare", 00:35:25.172 "progress": { 00:35:25.172 "blocks": 86016, 00:35:25.172 "percent": 65 00:35:25.172 } 00:35:25.172 }, 00:35:25.172 "base_bdevs_list": [ 00:35:25.173 { 00:35:25.173 "name": "spare", 00:35:25.173 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:25.173 "is_configured": true, 00:35:25.173 "data_offset": 0, 00:35:25.173 "data_size": 65536 00:35:25.173 }, 00:35:25.173 { 00:35:25.173 "name": "BaseBdev2", 00:35:25.173 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:25.173 "is_configured": true, 00:35:25.173 "data_offset": 0, 00:35:25.173 "data_size": 65536 00:35:25.173 }, 00:35:25.173 { 00:35:25.173 "name": "BaseBdev3", 00:35:25.173 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:25.173 "is_configured": true, 00:35:25.173 "data_offset": 0, 00:35:25.173 "data_size": 65536 00:35:25.173 } 00:35:25.173 ] 00:35:25.173 }' 00:35:25.173 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:25.430 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:25.430 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:25.430 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:25.430 09:02:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.361 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:26.619 "name": "raid_bdev1", 00:35:26.619 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:26.619 "strip_size_kb": 64, 00:35:26.619 "state": "online", 00:35:26.619 "raid_level": "raid5f", 00:35:26.619 "superblock": false, 00:35:26.619 "num_base_bdevs": 3, 00:35:26.619 "num_base_bdevs_discovered": 3, 00:35:26.619 "num_base_bdevs_operational": 3, 00:35:26.619 "process": { 00:35:26.619 "type": "rebuild", 00:35:26.619 "target": "spare", 00:35:26.619 "progress": { 00:35:26.619 "blocks": 114688, 00:35:26.619 "percent": 87 00:35:26.619 } 00:35:26.619 }, 00:35:26.619 "base_bdevs_list": [ 00:35:26.619 { 00:35:26.619 "name": "spare", 00:35:26.619 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:26.619 "is_configured": true, 00:35:26.619 "data_offset": 0, 00:35:26.619 "data_size": 65536 00:35:26.619 }, 00:35:26.619 { 00:35:26.619 "name": "BaseBdev2", 00:35:26.619 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:26.619 "is_configured": true, 00:35:26.619 "data_offset": 0, 00:35:26.619 "data_size": 65536 00:35:26.619 }, 00:35:26.619 { 00:35:26.619 "name": "BaseBdev3", 00:35:26.619 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:26.619 "is_configured": true, 00:35:26.619 "data_offset": 0, 00:35:26.619 "data_size": 65536 00:35:26.619 } 00:35:26.619 ] 00:35:26.619 }' 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:26.619 09:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:27.563 [2024-07-12 09:02:02.438102] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:27.563 [2024-07-12 09:02:02.438172] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:27.563 [2024-07-12 09:02:02.438244] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.821 09:02:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:28.079 "name": "raid_bdev1", 00:35:28.079 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:28.079 "strip_size_kb": 64, 00:35:28.079 "state": "online", 00:35:28.079 "raid_level": "raid5f", 00:35:28.079 "superblock": false, 00:35:28.079 "num_base_bdevs": 3, 00:35:28.079 "num_base_bdevs_discovered": 3, 00:35:28.079 "num_base_bdevs_operational": 3, 00:35:28.079 "base_bdevs_list": [ 00:35:28.079 { 00:35:28.079 "name": "spare", 00:35:28.079 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:28.079 "is_configured": true, 00:35:28.079 "data_offset": 0, 00:35:28.079 "data_size": 65536 00:35:28.079 }, 00:35:28.079 { 00:35:28.079 "name": "BaseBdev2", 00:35:28.079 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:28.079 "is_configured": true, 00:35:28.079 "data_offset": 0, 00:35:28.079 "data_size": 65536 00:35:28.079 }, 00:35:28.079 { 00:35:28.079 "name": "BaseBdev3", 00:35:28.079 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:28.079 "is_configured": true, 00:35:28.079 "data_offset": 0, 00:35:28.079 "data_size": 65536 00:35:28.079 } 00:35:28.079 ] 00:35:28.079 }' 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.079 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:28.336 "name": "raid_bdev1", 00:35:28.336 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:28.336 "strip_size_kb": 64, 00:35:28.336 "state": "online", 00:35:28.336 "raid_level": "raid5f", 00:35:28.336 
"superblock": false, 00:35:28.336 "num_base_bdevs": 3, 00:35:28.336 "num_base_bdevs_discovered": 3, 00:35:28.336 "num_base_bdevs_operational": 3, 00:35:28.336 "base_bdevs_list": [ 00:35:28.336 { 00:35:28.336 "name": "spare", 00:35:28.336 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:28.336 "is_configured": true, 00:35:28.336 "data_offset": 0, 00:35:28.336 "data_size": 65536 00:35:28.336 }, 00:35:28.336 { 00:35:28.336 "name": "BaseBdev2", 00:35:28.336 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 00:35:28.336 "is_configured": true, 00:35:28.336 "data_offset": 0, 00:35:28.336 "data_size": 65536 00:35:28.336 }, 00:35:28.336 { 00:35:28.336 "name": "BaseBdev3", 00:35:28.336 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:28.336 "is_configured": true, 00:35:28.336 "data_offset": 0, 00:35:28.336 "data_size": 65536 00:35:28.336 } 00:35:28.336 ] 00:35:28.336 }' 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.336 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.594 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:28.594 "name": "raid_bdev1", 00:35:28.594 "uuid": "7cc93714-7a79-4cdc-b30c-b1a1aa307882", 00:35:28.594 "strip_size_kb": 64, 00:35:28.594 "state": "online", 00:35:28.594 "raid_level": "raid5f", 00:35:28.594 "superblock": false, 00:35:28.594 "num_base_bdevs": 3, 00:35:28.594 "num_base_bdevs_discovered": 3, 00:35:28.594 "num_base_bdevs_operational": 3, 00:35:28.594 "base_bdevs_list": [ 00:35:28.594 { 00:35:28.594 "name": "spare", 00:35:28.594 "uuid": "2da1674b-2f62-55e0-9c1e-9de06c2ee1c4", 00:35:28.594 "is_configured": true, 00:35:28.594 "data_offset": 0, 00:35:28.594 "data_size": 65536 00:35:28.594 }, 00:35:28.594 { 00:35:28.594 "name": "BaseBdev2", 00:35:28.594 "uuid": "35b1a966-549e-52c3-ac33-7899d6b8f8f0", 
00:35:28.594 "is_configured": true, 00:35:28.594 "data_offset": 0, 00:35:28.594 "data_size": 65536 00:35:28.594 }, 00:35:28.594 { 00:35:28.594 "name": "BaseBdev3", 00:35:28.594 "uuid": "e6cdd640-4932-5288-84e7-26e4033e2a4d", 00:35:28.594 "is_configured": true, 00:35:28.594 "data_offset": 0, 00:35:28.594 "data_size": 65536 00:35:28.594 } 00:35:28.594 ] 00:35:28.594 }' 00:35:28.594 09:02:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:28.594 09:02:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.565 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:29.565 [2024-07-12 09:02:04.636001] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:29.565 [2024-07-12 09:02:04.636029] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:29.565 [2024-07-12 09:02:04.636113] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:29.565 [2024-07-12 09:02:04.636201] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:29.565 [2024-07-12 09:02:04.636215] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:35:29.565 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:29.565 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:35:29.837 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:29.837 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:29.837 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:29.837 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:29.838 09:02:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:30.095 /dev/nbd0 00:35:30.095 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:30.095 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 
00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:30.096 1+0 records in 00:35:30.096 1+0 records out 00:35:30.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023297 s, 17.6 MB/s 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:30.096 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:30.354 /dev/nbd1 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:30.354 1+0 records in 00:35:30.354 1+0 records out 00:35:30.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269256 s, 15.2 MB/s 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 
00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:30.354 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:30.612 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:30.870 09:02:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 155379 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 155379 ']' 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 155379 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155379 00:35:31.129 killing process with pid 155379 00:35:31.129 Received shutdown signal, test time was about 60.000000 seconds 00:35:31.129 00:35:31.129 Latency(us) 00:35:31.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.129 =================================================================================================================== 00:35:31.129 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155379' 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 155379 00:35:31.129 09:02:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 155379 00:35:31.129 [2024-07-12 09:02:06.307252] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:31.386 [2024-07-12 09:02:06.570634] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:32.759 ************************************ 00:35:32.759 END TEST raid5f_rebuild_test 00:35:32.759 ************************************ 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:35:32.759 00:35:32.759 real 0m21.946s 00:35:32.759 user 0m33.304s 00:35:32.759 sys 0m2.491s 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.759 09:02:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:32.759 09:02:07 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:35:32.759 09:02:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:32.759 09:02:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.759 09:02:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:35:32.759 ************************************ 00:35:32.759 START TEST raid5f_rebuild_test_sb 00:35:32.759 ************************************ 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=155974 00:35:32.759 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@597 -- # waitforlisten 155974 /var/tmp/spdk-raid.sock 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 155974 ']' 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:32.760 09:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:32.760 [2024-07-12 09:02:07.703152] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:35:32.760 [2024-07-12 09:02:07.703325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155974 ] 00:35:32.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:32.760 Zero copy mechanism will not be used. 00:35:32.760 [2024-07-12 09:02:07.856638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.018 [2024-07-12 09:02:08.048548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.277 [2024-07-12 09:02:08.241423] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:33.535 09:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:33.535 09:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:35:33.535 09:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:33.535 09:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:33.792 BaseBdev1_malloc 00:35:33.793 09:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:34.051 [2024-07-12 09:02:09.199535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:34.051 [2024-07-12 09:02:09.199645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.051 [2024-07-12 09:02:09.199684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:35:34.051 [2024-07-12 09:02:09.199703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.051 [2024-07-12 09:02:09.201739] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.051 [2024-07-12 09:02:09.201787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:35:34.051 BaseBdev1 00:35:34.051 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:34.051 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:34.309 BaseBdev2_malloc 00:35:34.309 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:34.567 [2024-07-12 09:02:09.630742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:34.567 [2024-07-12 09:02:09.630860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.567 [2024-07-12 09:02:09.630898] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:34.567 [2024-07-12 09:02:09.630917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.567 [2024-07-12 09:02:09.632838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.567 [2024-07-12 09:02:09.632902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:34.567 BaseBdev2 00:35:34.567 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:34.567 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:34.825 BaseBdev3_malloc 00:35:34.825 09:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:35.083 [2024-07-12 09:02:10.099858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:35.083 [2024-07-12 09:02:10.099944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.083 [2024-07-12 09:02:10.099978] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:35.083 [2024-07-12 09:02:10.100003] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.083 [2024-07-12 09:02:10.102187] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.083 [2024-07-12 09:02:10.102241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:35.083 BaseBdev3 00:35:35.083 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:35.340 spare_malloc 00:35:35.340 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:35.598 spare_delay 00:35:35.598 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:35.598 [2024-07-12 09:02:10.748693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:35.598 [2024-07-12 09:02:10.748782] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.598 [2024-07-12 09:02:10.748816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:35:35.598 [2024-07-12 09:02:10.748839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.598 [2024-07-12 09:02:10.751067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.598 [2024-07-12 09:02:10.751156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:35.598 spare 00:35:35.598 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:35:35.856 [2024-07-12 09:02:10.940783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:35.856 [2024-07-12 09:02:10.942480] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:35.856 [2024-07-12 09:02:10.942554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:35.856 [2024-07-12 09:02:10.942806] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:35:35.856 [2024-07-12 09:02:10.942856] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:35.856 [2024-07-12 09:02:10.942970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:35.856 [2024-07-12 09:02:10.947059] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:35:35.856 [2024-07-12 09:02:10.947084] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:35:35.856 [2024-07-12 09:02:10.947259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.856 09:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.114 09:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.114 "name": "raid_bdev1", 00:35:36.114 
"uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:36.114 "strip_size_kb": 64, 00:35:36.114 "state": "online", 00:35:36.114 "raid_level": "raid5f", 00:35:36.114 "superblock": true, 00:35:36.114 "num_base_bdevs": 3, 00:35:36.114 "num_base_bdevs_discovered": 3, 00:35:36.114 "num_base_bdevs_operational": 3, 00:35:36.114 "base_bdevs_list": [ 00:35:36.114 { 00:35:36.114 "name": "BaseBdev1", 00:35:36.114 "uuid": "df7ff60f-3bfa-5d1f-8c01-fa0e7d0c626b", 00:35:36.114 "is_configured": true, 00:35:36.114 "data_offset": 2048, 00:35:36.114 "data_size": 63488 00:35:36.114 }, 00:35:36.114 { 00:35:36.114 "name": "BaseBdev2", 00:35:36.114 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:36.114 "is_configured": true, 00:35:36.114 "data_offset": 2048, 00:35:36.114 "data_size": 63488 00:35:36.114 }, 00:35:36.114 { 00:35:36.114 "name": "BaseBdev3", 00:35:36.114 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:36.114 "is_configured": true, 00:35:36.114 "data_offset": 2048, 00:35:36.114 "data_size": 63488 00:35:36.114 } 00:35:36.114 ] 00:35:36.114 }' 00:35:36.114 09:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.114 09:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.680 09:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:36.680 09:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:36.937 [2024-07-12 09:02:12.032328] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:36.937 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:35:36.937 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.937 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:37.195 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:37.453 [2024-07-12 09:02:12.556292] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:37.453 /dev/nbd0 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:37.453 1+0 records in 00:35:37.453 1+0 records out 00:35:37.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385294 s, 10.6 MB/s 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:35:37.453 09:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:35:38.028 496+0 records in 00:35:38.028 496+0 records out 00:35:38.028 65011712 bytes (65 MB, 62 MiB) copied, 0.402852 s, 161 MB/s 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:38.028 09:02:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:38.028 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:38.286 [2024-07-12 09:02:13.227705] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:38.286 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:38.542 [2024-07-12 09:02:13.569575] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:38.542 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:38.542 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.543 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.801 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:38.801 "name": 
"raid_bdev1", 00:35:38.801 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:38.801 "strip_size_kb": 64, 00:35:38.801 "state": "online", 00:35:38.801 "raid_level": "raid5f", 00:35:38.801 "superblock": true, 00:35:38.801 "num_base_bdevs": 3, 00:35:38.801 "num_base_bdevs_discovered": 2, 00:35:38.801 "num_base_bdevs_operational": 2, 00:35:38.801 "base_bdevs_list": [ 00:35:38.801 { 00:35:38.801 "name": null, 00:35:38.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.801 "is_configured": false, 00:35:38.801 "data_offset": 2048, 00:35:38.801 "data_size": 63488 00:35:38.801 }, 00:35:38.801 { 00:35:38.801 "name": "BaseBdev2", 00:35:38.801 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:38.801 "is_configured": true, 00:35:38.801 "data_offset": 2048, 00:35:38.801 "data_size": 63488 00:35:38.801 }, 00:35:38.801 { 00:35:38.801 "name": "BaseBdev3", 00:35:38.801 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:38.801 "is_configured": true, 00:35:38.801 "data_offset": 2048, 00:35:38.801 "data_size": 63488 00:35:38.801 } 00:35:38.801 ] 00:35:38.801 }' 00:35:38.801 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:38.801 09:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.367 09:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:39.625 [2024-07-12 09:02:14.733792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:39.625 [2024-07-12 09:02:14.744487] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:35:39.625 [2024-07-12 09:02:14.749973] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:39.625 09:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.996 09:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:40.996 "name": "raid_bdev1", 00:35:40.996 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:40.996 "strip_size_kb": 64, 00:35:40.996 "state": "online", 00:35:40.996 "raid_level": "raid5f", 00:35:40.996 "superblock": true, 00:35:40.996 "num_base_bdevs": 3, 00:35:40.996 "num_base_bdevs_discovered": 3, 00:35:40.996 "num_base_bdevs_operational": 3, 00:35:40.996 "process": { 00:35:40.996 "type": "rebuild", 00:35:40.996 "target": "spare", 00:35:40.996 "progress": { 00:35:40.996 "blocks": 24576, 00:35:40.996 "percent": 19 00:35:40.996 } 00:35:40.996 }, 00:35:40.996 "base_bdevs_list": [ 00:35:40.996 { 
00:35:40.996 "name": "spare", 00:35:40.996 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:40.996 "is_configured": true, 00:35:40.996 "data_offset": 2048, 00:35:40.996 "data_size": 63488 00:35:40.996 }, 00:35:40.996 { 00:35:40.996 "name": "BaseBdev2", 00:35:40.996 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:40.996 "is_configured": true, 00:35:40.996 "data_offset": 2048, 00:35:40.996 "data_size": 63488 00:35:40.996 }, 00:35:40.996 { 00:35:40.996 "name": "BaseBdev3", 00:35:40.996 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:40.996 "is_configured": true, 00:35:40.996 "data_offset": 2048, 00:35:40.996 "data_size": 63488 00:35:40.996 } 00:35:40.996 ] 00:35:40.996 }' 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:40.996 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:41.253 [2024-07-12 09:02:16.371485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:41.509 [2024-07-12 09:02:16.464238] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:41.509 [2024-07-12 09:02:16.464328] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:41.509 [2024-07-12 09:02:16.464348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:41.509 [2024-07-12 09:02:16.464356] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.509 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.766 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:41.766 "name": "raid_bdev1", 
00:35:41.766 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:41.766 "strip_size_kb": 64, 00:35:41.766 "state": "online", 00:35:41.766 "raid_level": "raid5f", 00:35:41.766 "superblock": true, 00:35:41.766 "num_base_bdevs": 3, 00:35:41.766 "num_base_bdevs_discovered": 2, 00:35:41.766 "num_base_bdevs_operational": 2, 00:35:41.766 "base_bdevs_list": [ 00:35:41.766 { 00:35:41.766 "name": null, 00:35:41.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:41.766 "is_configured": false, 00:35:41.766 "data_offset": 2048, 00:35:41.766 "data_size": 63488 00:35:41.766 }, 00:35:41.766 { 00:35:41.766 "name": "BaseBdev2", 00:35:41.766 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:41.766 "is_configured": true, 00:35:41.766 "data_offset": 2048, 00:35:41.766 "data_size": 63488 00:35:41.766 }, 00:35:41.766 { 00:35:41.766 "name": "BaseBdev3", 00:35:41.766 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:41.766 "is_configured": true, 00:35:41.766 "data_offset": 2048, 00:35:41.766 "data_size": 63488 00:35:41.766 } 00:35:41.766 ] 00:35:41.766 }' 00:35:41.766 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:41.766 09:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.332 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:42.589 "name": "raid_bdev1", 00:35:42.589 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:42.589 "strip_size_kb": 64, 00:35:42.589 "state": "online", 00:35:42.589 "raid_level": "raid5f", 00:35:42.589 "superblock": true, 00:35:42.589 "num_base_bdevs": 3, 00:35:42.589 "num_base_bdevs_discovered": 2, 00:35:42.589 "num_base_bdevs_operational": 2, 00:35:42.589 "base_bdevs_list": [ 00:35:42.589 { 00:35:42.589 "name": null, 00:35:42.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.589 "is_configured": false, 00:35:42.589 "data_offset": 2048, 00:35:42.589 "data_size": 63488 00:35:42.589 }, 00:35:42.589 { 00:35:42.589 "name": "BaseBdev2", 00:35:42.589 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:42.589 "is_configured": true, 00:35:42.589 "data_offset": 2048, 00:35:42.589 "data_size": 63488 00:35:42.589 }, 00:35:42.589 { 00:35:42.589 "name": "BaseBdev3", 00:35:42.589 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:42.589 "is_configured": true, 00:35:42.589 "data_offset": 2048, 00:35:42.589 "data_size": 63488 00:35:42.589 } 00:35:42.589 ] 00:35:42.589 }' 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == 
\n\o\n\e ]] 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:42.589 09:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:42.846 [2024-07-12 09:02:18.020199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:42.846 [2024-07-12 09:02:18.030211] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:35:42.846 [2024-07-12 09:02:18.035724] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:42.846 09:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:44.220 "name": "raid_bdev1", 00:35:44.220 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:44.220 "strip_size_kb": 64, 00:35:44.220 "state": "online", 00:35:44.220 "raid_level": "raid5f", 00:35:44.220 "superblock": true, 00:35:44.220 "num_base_bdevs": 3, 00:35:44.220 "num_base_bdevs_discovered": 3, 00:35:44.220 "num_base_bdevs_operational": 3, 00:35:44.220 "process": { 00:35:44.220 "type": "rebuild", 00:35:44.220 "target": "spare", 00:35:44.220 "progress": { 00:35:44.220 "blocks": 24576, 00:35:44.220 "percent": 19 00:35:44.220 } 00:35:44.220 }, 00:35:44.220 "base_bdevs_list": [ 00:35:44.220 { 00:35:44.220 "name": "spare", 00:35:44.220 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:44.220 "is_configured": true, 00:35:44.220 "data_offset": 2048, 00:35:44.220 "data_size": 63488 00:35:44.220 }, 00:35:44.220 { 00:35:44.220 "name": "BaseBdev2", 00:35:44.220 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:44.220 "is_configured": true, 00:35:44.220 "data_offset": 2048, 00:35:44.220 "data_size": 63488 00:35:44.220 }, 00:35:44.220 { 00:35:44.220 "name": "BaseBdev3", 00:35:44.220 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:44.220 "is_configured": true, 00:35:44.220 "data_offset": 2048, 00:35:44.220 "data_size": 63488 00:35:44.220 } 00:35:44.220 ] 00:35:44.220 }' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:35:44.220 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1238 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:44.220 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:44.478 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.478 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.478 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:44.478 "name": "raid_bdev1", 00:35:44.478 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:44.478 "strip_size_kb": 64, 00:35:44.478 "state": "online", 00:35:44.478 "raid_level": "raid5f", 00:35:44.478 "superblock": true, 00:35:44.478 "num_base_bdevs": 3, 00:35:44.478 "num_base_bdevs_discovered": 3, 00:35:44.478 "num_base_bdevs_operational": 3, 00:35:44.478 "process": { 00:35:44.478 "type": "rebuild", 00:35:44.478 "target": "spare", 00:35:44.478 "progress": { 00:35:44.478 "blocks": 30720, 00:35:44.478 "percent": 24 00:35:44.478 } 00:35:44.478 }, 00:35:44.478 "base_bdevs_list": [ 00:35:44.478 { 00:35:44.478 "name": "spare", 00:35:44.478 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:44.478 "is_configured": true, 00:35:44.478 "data_offset": 2048, 00:35:44.478 "data_size": 63488 00:35:44.478 }, 00:35:44.478 { 00:35:44.478 "name": "BaseBdev2", 00:35:44.478 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:44.478 "is_configured": true, 00:35:44.478 "data_offset": 2048, 00:35:44.478 "data_size": 63488 00:35:44.478 }, 00:35:44.478 { 00:35:44.478 "name": "BaseBdev3", 00:35:44.478 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:44.478 "is_configured": true, 00:35:44.478 "data_offset": 2048, 00:35:44.478 "data_size": 63488 00:35:44.478 } 00:35:44.478 ] 00:35:44.478 }' 00:35:44.478 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:44.737 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:44.737 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:44.737 09:02:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:44.737 09:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.670 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.929 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:45.929 "name": "raid_bdev1", 00:35:45.929 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:45.929 "strip_size_kb": 64, 00:35:45.929 "state": "online", 00:35:45.929 "raid_level": "raid5f", 00:35:45.929 "superblock": true, 00:35:45.929 "num_base_bdevs": 3, 00:35:45.929 "num_base_bdevs_discovered": 3, 00:35:45.929 "num_base_bdevs_operational": 3, 00:35:45.929 "process": { 00:35:45.929 "type": "rebuild", 00:35:45.929 "target": "spare", 00:35:45.929 "progress": { 00:35:45.929 "blocks": 59392, 00:35:45.929 "percent": 46 00:35:45.929 } 00:35:45.929 }, 00:35:45.929 "base_bdevs_list": [ 00:35:45.929 { 00:35:45.929 "name": "spare", 00:35:45.929 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:45.929 "is_configured": true, 00:35:45.929 "data_offset": 2048, 00:35:45.929 "data_size": 63488 00:35:45.929 }, 00:35:45.929 { 00:35:45.929 "name": "BaseBdev2", 00:35:45.929 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:45.929 "is_configured": true, 00:35:45.929 "data_offset": 2048, 00:35:45.929 "data_size": 63488 00:35:45.929 }, 00:35:45.929 { 00:35:45.929 "name": "BaseBdev3", 00:35:45.929 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:45.929 "is_configured": true, 00:35:45.929 "data_offset": 2048, 00:35:45.929 "data_size": 63488 00:35:45.929 } 00:35:45.929 ] 00:35:45.929 }' 00:35:45.929 09:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:45.929 09:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:45.929 09:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:45.929 09:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:45.929 09:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:47.300 09:02:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:47.300 "name": "raid_bdev1", 00:35:47.300 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:47.300 "strip_size_kb": 64, 00:35:47.300 "state": "online", 00:35:47.300 "raid_level": "raid5f", 00:35:47.300 "superblock": true, 00:35:47.300 "num_base_bdevs": 3, 00:35:47.300 "num_base_bdevs_discovered": 3, 00:35:47.300 "num_base_bdevs_operational": 3, 00:35:47.300 "process": { 00:35:47.300 "type": "rebuild", 00:35:47.300 "target": "spare", 00:35:47.300 "progress": { 00:35:47.300 "blocks": 86016, 00:35:47.300 "percent": 67 00:35:47.300 } 00:35:47.300 }, 00:35:47.300 "base_bdevs_list": [ 00:35:47.300 { 00:35:47.300 "name": "spare", 00:35:47.300 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:47.300 "is_configured": true, 00:35:47.300 "data_offset": 2048, 00:35:47.300 "data_size": 63488 00:35:47.300 }, 00:35:47.300 { 00:35:47.300 "name": "BaseBdev2", 00:35:47.300 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:47.300 "is_configured": true, 00:35:47.300 "data_offset": 2048, 00:35:47.300 "data_size": 63488 00:35:47.300 }, 00:35:47.300 { 00:35:47.300 "name": "BaseBdev3", 00:35:47.300 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:47.300 "is_configured": true, 00:35:47.300 "data_offset": 2048, 00:35:47.300 "data_size": 63488 00:35:47.300 } 00:35:47.300 ] 00:35:47.300 }' 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:47.300 09:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.675 "name": "raid_bdev1", 00:35:48.675 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:48.675 "strip_size_kb": 64, 00:35:48.675 "state": "online", 00:35:48.675 "raid_level": "raid5f", 00:35:48.675 "superblock": true, 00:35:48.675 "num_base_bdevs": 3, 00:35:48.675 "num_base_bdevs_discovered": 3, 00:35:48.675 "num_base_bdevs_operational": 3, 00:35:48.675 "process": { 00:35:48.675 "type": "rebuild", 00:35:48.675 "target": "spare", 00:35:48.675 "progress": { 00:35:48.675 "blocks": 112640, 00:35:48.675 "percent": 88 00:35:48.675 } 00:35:48.675 }, 00:35:48.675 "base_bdevs_list": [ 00:35:48.675 { 00:35:48.675 "name": "spare", 00:35:48.675 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:48.675 "is_configured": true, 00:35:48.675 "data_offset": 2048, 00:35:48.675 "data_size": 63488 00:35:48.675 }, 00:35:48.675 { 00:35:48.675 "name": "BaseBdev2", 00:35:48.675 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:48.675 "is_configured": true, 00:35:48.675 "data_offset": 2048, 00:35:48.675 "data_size": 63488 00:35:48.675 }, 00:35:48.675 { 00:35:48.675 "name": "BaseBdev3", 00:35:48.675 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:48.675 "is_configured": true, 00:35:48.675 "data_offset": 2048, 00:35:48.675 "data_size": 63488 00:35:48.675 } 00:35:48.675 ] 00:35:48.675 }' 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.675 09:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:49.241 [2024-07-12 09:02:24.286369] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:49.241 [2024-07-12 09:02:24.286466] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:49.241 [2024-07-12 09:02:24.286594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:49.808 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:49.808 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.808 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:49.808 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:49.809 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:49.809 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:49.809 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.809 09:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.066 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:50.067 "name": "raid_bdev1", 00:35:50.067 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 
00:35:50.067 "strip_size_kb": 64, 00:35:50.067 "state": "online", 00:35:50.067 "raid_level": "raid5f", 00:35:50.067 "superblock": true, 00:35:50.067 "num_base_bdevs": 3, 00:35:50.067 "num_base_bdevs_discovered": 3, 00:35:50.067 "num_base_bdevs_operational": 3, 00:35:50.067 "base_bdevs_list": [ 00:35:50.067 { 00:35:50.067 "name": "spare", 00:35:50.067 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:50.067 "is_configured": true, 00:35:50.067 "data_offset": 2048, 00:35:50.067 "data_size": 63488 00:35:50.067 }, 00:35:50.067 { 00:35:50.067 "name": "BaseBdev2", 00:35:50.067 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:50.067 "is_configured": true, 00:35:50.067 "data_offset": 2048, 00:35:50.067 "data_size": 63488 00:35:50.067 }, 00:35:50.067 { 00:35:50.067 "name": "BaseBdev3", 00:35:50.067 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:50.067 "is_configured": true, 00:35:50.067 "data_offset": 2048, 00:35:50.067 "data_size": 63488 00:35:50.067 } 00:35:50.067 ] 00:35:50.067 }' 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.067 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:50.326 "name": "raid_bdev1", 00:35:50.326 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:50.326 "strip_size_kb": 64, 00:35:50.326 "state": "online", 00:35:50.326 "raid_level": "raid5f", 00:35:50.326 "superblock": true, 00:35:50.326 "num_base_bdevs": 3, 00:35:50.326 "num_base_bdevs_discovered": 3, 00:35:50.326 "num_base_bdevs_operational": 3, 00:35:50.326 "base_bdevs_list": [ 00:35:50.326 { 00:35:50.326 "name": "spare", 00:35:50.326 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:50.326 "is_configured": true, 00:35:50.326 "data_offset": 2048, 00:35:50.326 "data_size": 63488 00:35:50.326 }, 00:35:50.326 { 00:35:50.326 "name": "BaseBdev2", 00:35:50.326 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:50.326 "is_configured": true, 00:35:50.326 "data_offset": 2048, 00:35:50.326 "data_size": 63488 00:35:50.326 }, 00:35:50.326 { 00:35:50.326 "name": "BaseBdev3", 00:35:50.326 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:50.326 "is_configured": true, 00:35:50.326 "data_offset": 
2048, 00:35:50.326 "data_size": 63488 00:35:50.326 } 00:35:50.326 ] 00:35:50.326 }' 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.326 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.584 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:50.584 "name": "raid_bdev1", 00:35:50.584 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:50.584 "strip_size_kb": 64, 00:35:50.584 "state": "online", 00:35:50.584 "raid_level": "raid5f", 00:35:50.584 "superblock": true, 00:35:50.584 "num_base_bdevs": 3, 00:35:50.584 "num_base_bdevs_discovered": 3, 00:35:50.584 "num_base_bdevs_operational": 3, 00:35:50.584 "base_bdevs_list": [ 00:35:50.584 { 00:35:50.584 "name": "spare", 00:35:50.584 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:50.584 "is_configured": true, 00:35:50.584 "data_offset": 2048, 00:35:50.584 "data_size": 63488 00:35:50.584 }, 00:35:50.584 { 00:35:50.584 "name": "BaseBdev2", 00:35:50.584 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:50.584 "is_configured": true, 00:35:50.584 "data_offset": 2048, 00:35:50.584 "data_size": 63488 00:35:50.584 }, 00:35:50.584 { 00:35:50.584 "name": "BaseBdev3", 00:35:50.584 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:50.584 "is_configured": true, 00:35:50.584 "data_offset": 2048, 00:35:50.584 "data_size": 63488 00:35:50.584 } 00:35:50.584 ] 00:35:50.584 }' 00:35:50.584 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:50.584 09:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.516 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:35:51.516 [2024-07-12 09:02:26.606012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:51.516 [2024-07-12 09:02:26.606048] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:51.516 [2024-07-12 09:02:26.606153] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:51.516 [2024-07-12 09:02:26.606250] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:51.516 [2024-07-12 09:02:26.606264] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:35:51.516 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.516 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:51.774 09:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:52.033 /dev/nbd0 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:52.033 09:02:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.033 1+0 records in 00:35:52.033 1+0 records out 00:35:52.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324941 s, 12.6 MB/s 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:52.033 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:52.291 /dev/nbd1 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.291 1+0 records in 00:35:52.291 1+0 records out 00:35:52.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398896 s, 10.3 MB/s 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:52.291 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:52.291 09:02:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.549 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.806 09:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.064 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.321 09:02:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:35:53.321 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:53.577 [2024-07-12 09:02:28.742925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:53.577 [2024-07-12 09:02:28.743006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.577 [2024-07-12 09:02:28.743062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:53.577 [2024-07-12 09:02:28.743095] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.577 [2024-07-12 09:02:28.745479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.577 [2024-07-12 09:02:28.745531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:53.577 [2024-07-12 09:02:28.745633] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:53.577 [2024-07-12 09:02:28.745691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:53.577 [2024-07-12 09:02:28.745845] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:53.577 [2024-07-12 09:02:28.745962] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:53.577 spare 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.577 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.834 [2024-07-12 09:02:28.846070] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:35:53.834 [2024-07-12 09:02:28.846093] bdev_raid.c:1695:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 126976, blocklen 512 00:35:53.834 [2024-07-12 09:02:28.846198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:35:53.834 [2024-07-12 09:02:28.850325] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:35:53.834 [2024-07-12 09:02:28.850348] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:35:53.834 [2024-07-12 09:02:28.850520] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.834 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:53.834 "name": "raid_bdev1", 00:35:53.834 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:53.834 "strip_size_kb": 64, 00:35:53.834 "state": "online", 00:35:53.834 "raid_level": "raid5f", 00:35:53.834 "superblock": true, 00:35:53.834 "num_base_bdevs": 3, 00:35:53.834 "num_base_bdevs_discovered": 3, 00:35:53.834 "num_base_bdevs_operational": 3, 00:35:53.834 "base_bdevs_list": [ 00:35:53.834 { 00:35:53.834 "name": "spare", 00:35:53.834 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:53.834 "is_configured": true, 00:35:53.834 "data_offset": 2048, 00:35:53.834 "data_size": 63488 00:35:53.834 }, 00:35:53.834 { 00:35:53.834 "name": "BaseBdev2", 00:35:53.834 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:53.834 "is_configured": true, 00:35:53.834 "data_offset": 2048, 00:35:53.834 "data_size": 63488 00:35:53.834 }, 00:35:53.834 { 00:35:53.834 "name": "BaseBdev3", 00:35:53.834 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:53.834 "is_configured": true, 00:35:53.834 "data_offset": 2048, 00:35:53.834 "data_size": 63488 00:35:53.834 } 00:35:53.834 ] 00:35:53.834 }' 00:35:53.834 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:53.835 09:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.498 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:54.787 "name": "raid_bdev1", 00:35:54.787 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:54.787 "strip_size_kb": 64, 00:35:54.787 "state": "online", 00:35:54.787 "raid_level": "raid5f", 00:35:54.787 "superblock": true, 00:35:54.787 "num_base_bdevs": 3, 00:35:54.787 "num_base_bdevs_discovered": 3, 00:35:54.787 "num_base_bdevs_operational": 3, 00:35:54.787 "base_bdevs_list": [ 00:35:54.787 { 00:35:54.787 "name": "spare", 00:35:54.787 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:54.787 "is_configured": true, 00:35:54.787 "data_offset": 2048, 00:35:54.787 "data_size": 63488 
00:35:54.787 }, 00:35:54.787 { 00:35:54.787 "name": "BaseBdev2", 00:35:54.787 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:54.787 "is_configured": true, 00:35:54.787 "data_offset": 2048, 00:35:54.787 "data_size": 63488 00:35:54.787 }, 00:35:54.787 { 00:35:54.787 "name": "BaseBdev3", 00:35:54.787 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:54.787 "is_configured": true, 00:35:54.787 "data_offset": 2048, 00:35:54.787 "data_size": 63488 00:35:54.787 } 00:35:54.787 ] 00:35:54.787 }' 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.787 09:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:55.050 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:35:55.050 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:55.308 [2024-07-12 09:02:30.409851] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.308 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.566 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:55.566 "name": "raid_bdev1", 00:35:55.566 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:55.566 "strip_size_kb": 64, 00:35:55.566 "state": "online", 00:35:55.566 "raid_level": "raid5f", 00:35:55.566 "superblock": true, 00:35:55.566 "num_base_bdevs": 3, 00:35:55.566 "num_base_bdevs_discovered": 2, 00:35:55.566 "num_base_bdevs_operational": 2, 
00:35:55.566 "base_bdevs_list": [ 00:35:55.566 { 00:35:55.566 "name": null, 00:35:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.566 "is_configured": false, 00:35:55.566 "data_offset": 2048, 00:35:55.566 "data_size": 63488 00:35:55.566 }, 00:35:55.566 { 00:35:55.566 "name": "BaseBdev2", 00:35:55.566 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:55.566 "is_configured": true, 00:35:55.566 "data_offset": 2048, 00:35:55.566 "data_size": 63488 00:35:55.566 }, 00:35:55.566 { 00:35:55.566 "name": "BaseBdev3", 00:35:55.566 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:55.566 "is_configured": true, 00:35:55.566 "data_offset": 2048, 00:35:55.566 "data_size": 63488 00:35:55.566 } 00:35:55.566 ] 00:35:55.566 }' 00:35:55.566 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:55.566 09:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:56.129 09:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:56.387 [2024-07-12 09:02:31.482031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:56.387 [2024-07-12 09:02:31.482205] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:56.387 [2024-07-12 09:02:31.482222] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:56.387 [2024-07-12 09:02:31.482318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:56.387 [2024-07-12 09:02:31.493176] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bce0 00:35:56.387 [2024-07-12 09:02:31.498786] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:56.387 09:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.320 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.578 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:57.578 "name": "raid_bdev1", 00:35:57.579 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:57.579 "strip_size_kb": 64, 00:35:57.579 "state": "online", 00:35:57.579 "raid_level": "raid5f", 00:35:57.579 "superblock": true, 00:35:57.579 "num_base_bdevs": 3, 00:35:57.579 "num_base_bdevs_discovered": 3, 00:35:57.579 "num_base_bdevs_operational": 3, 00:35:57.579 "process": { 00:35:57.579 "type": "rebuild", 00:35:57.579 "target": "spare", 00:35:57.579 "progress": { 00:35:57.579 "blocks": 24576, 
00:35:57.579 "percent": 19 00:35:57.579 } 00:35:57.579 }, 00:35:57.579 "base_bdevs_list": [ 00:35:57.579 { 00:35:57.579 "name": "spare", 00:35:57.579 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:35:57.579 "is_configured": true, 00:35:57.579 "data_offset": 2048, 00:35:57.579 "data_size": 63488 00:35:57.579 }, 00:35:57.579 { 00:35:57.579 "name": "BaseBdev2", 00:35:57.579 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:57.579 "is_configured": true, 00:35:57.579 "data_offset": 2048, 00:35:57.579 "data_size": 63488 00:35:57.579 }, 00:35:57.579 { 00:35:57.579 "name": "BaseBdev3", 00:35:57.579 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:57.579 "is_configured": true, 00:35:57.579 "data_offset": 2048, 00:35:57.579 "data_size": 63488 00:35:57.579 } 00:35:57.579 ] 00:35:57.579 }' 00:35:57.579 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:57.836 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:57.836 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:57.836 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:57.836 09:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:58.092 [2024-07-12 09:02:33.096548] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:58.092 [2024-07-12 09:02:33.113587] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:58.092 [2024-07-12 09:02:33.113658] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.092 [2024-07-12 09:02:33.113678] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:58.092 [2024-07-12 09:02:33.113686] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.092 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.349 09:02:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:58.349 "name": "raid_bdev1", 00:35:58.349 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:35:58.349 "strip_size_kb": 64, 00:35:58.349 "state": "online", 00:35:58.349 "raid_level": "raid5f", 00:35:58.349 "superblock": true, 00:35:58.349 "num_base_bdevs": 3, 00:35:58.349 "num_base_bdevs_discovered": 2, 00:35:58.349 "num_base_bdevs_operational": 2, 00:35:58.349 "base_bdevs_list": [ 00:35:58.349 { 00:35:58.349 "name": null, 00:35:58.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.349 "is_configured": false, 00:35:58.349 "data_offset": 2048, 00:35:58.349 "data_size": 63488 00:35:58.349 }, 00:35:58.349 { 00:35:58.349 "name": "BaseBdev2", 00:35:58.349 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:35:58.349 "is_configured": true, 00:35:58.349 "data_offset": 2048, 00:35:58.349 "data_size": 63488 00:35:58.349 }, 00:35:58.349 { 00:35:58.349 "name": "BaseBdev3", 00:35:58.349 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:35:58.349 "is_configured": true, 00:35:58.349 "data_offset": 2048, 00:35:58.349 "data_size": 63488 00:35:58.349 } 00:35:58.349 ] 00:35:58.349 }' 00:35:58.349 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:58.349 09:02:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:58.914 09:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:59.171 [2024-07-12 09:02:34.314057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:59.171 [2024-07-12 09:02:34.314125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:59.171 [2024-07-12 09:02:34.314163] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:35:59.171 [2024-07-12 09:02:34.314192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:59.171 [2024-07-12 09:02:34.314719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:59.171 [2024-07-12 09:02:34.314759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:59.171 [2024-07-12 09:02:34.314865] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:59.171 [2024-07-12 09:02:34.314882] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:59.171 [2024-07-12 09:02:34.314890] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
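For reference, the re-add-and-rebuild sequence traced above boils down to two RPC calls; a minimal sketch against the same socket, using only commands and jq filters that appear verbatim in this log (illustrative only, not part of the recorded run):

  # Re-create the delayed passthru bdev "spare" on top of spare_delay; on examine,
  # the raid module re-adds it to raid_bdev1 and starts a rebuild (see NOTICE above).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b spare_delay -p spare
  # Query the raid bdev and confirm a rebuild targeting "spare" is reported.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | (.process.type // "none"), (.process.target // "none")'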
00:35:59.171 [2024-07-12 09:02:34.314933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:59.171 [2024-07-12 09:02:34.324420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004c020 00:35:59.171 spare 00:35:59.171 [2024-07-12 09:02:34.329723] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:59.171 09:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:00.544 "name": "raid_bdev1", 00:36:00.544 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:00.544 "strip_size_kb": 64, 00:36:00.544 "state": "online", 00:36:00.544 "raid_level": "raid5f", 00:36:00.544 "superblock": true, 00:36:00.544 "num_base_bdevs": 3, 00:36:00.544 "num_base_bdevs_discovered": 3, 00:36:00.544 "num_base_bdevs_operational": 3, 00:36:00.544 "process": { 00:36:00.544 "type": "rebuild", 00:36:00.544 "target": "spare", 00:36:00.544 "progress": { 00:36:00.544 "blocks": 24576, 00:36:00.544 "percent": 19 00:36:00.544 } 00:36:00.544 }, 00:36:00.544 "base_bdevs_list": [ 00:36:00.544 { 00:36:00.544 "name": "spare", 00:36:00.544 "uuid": "4d86126b-2f7e-509c-bedf-a41d4870c9d1", 00:36:00.544 "is_configured": true, 00:36:00.544 "data_offset": 2048, 00:36:00.544 "data_size": 63488 00:36:00.544 }, 00:36:00.544 { 00:36:00.544 "name": "BaseBdev2", 00:36:00.544 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:00.544 "is_configured": true, 00:36:00.544 "data_offset": 2048, 00:36:00.544 "data_size": 63488 00:36:00.544 }, 00:36:00.544 { 00:36:00.544 "name": "BaseBdev3", 00:36:00.544 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:00.544 "is_configured": true, 00:36:00.544 "data_offset": 2048, 00:36:00.544 "data_size": 63488 00:36:00.544 } 00:36:00.544 ] 00:36:00.544 }' 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:00.544 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:00.802 [2024-07-12 09:02:35.915424] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:00.802 
[2024-07-12 09:02:35.944487] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:00.802 [2024-07-12 09:02:35.944553] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.802 [2024-07-12 09:02:35.944571] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:00.802 [2024-07-12 09:02:35.944579] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.802 09:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.060 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:01.060 "name": "raid_bdev1", 00:36:01.060 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:01.060 "strip_size_kb": 64, 00:36:01.060 "state": "online", 00:36:01.060 "raid_level": "raid5f", 00:36:01.060 "superblock": true, 00:36:01.060 "num_base_bdevs": 3, 00:36:01.060 "num_base_bdevs_discovered": 2, 00:36:01.060 "num_base_bdevs_operational": 2, 00:36:01.060 "base_bdevs_list": [ 00:36:01.060 { 00:36:01.060 "name": null, 00:36:01.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.060 "is_configured": false, 00:36:01.060 "data_offset": 2048, 00:36:01.060 "data_size": 63488 00:36:01.060 }, 00:36:01.060 { 00:36:01.060 "name": "BaseBdev2", 00:36:01.060 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:01.060 "is_configured": true, 00:36:01.060 "data_offset": 2048, 00:36:01.060 "data_size": 63488 00:36:01.060 }, 00:36:01.060 { 00:36:01.060 "name": "BaseBdev3", 00:36:01.060 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:01.060 "is_configured": true, 00:36:01.060 "data_offset": 2048, 00:36:01.060 "data_size": 63488 00:36:01.060 } 00:36:01.060 ] 00:36:01.060 }' 00:36:01.060 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:01.060 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.993 09:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.993 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:01.993 "name": "raid_bdev1", 00:36:01.993 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:01.993 "strip_size_kb": 64, 00:36:01.993 "state": "online", 00:36:01.993 "raid_level": "raid5f", 00:36:01.993 "superblock": true, 00:36:01.993 "num_base_bdevs": 3, 00:36:01.993 "num_base_bdevs_discovered": 2, 00:36:01.993 "num_base_bdevs_operational": 2, 00:36:01.993 "base_bdevs_list": [ 00:36:01.993 { 00:36:01.993 "name": null, 00:36:01.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.993 "is_configured": false, 00:36:01.993 "data_offset": 2048, 00:36:01.993 "data_size": 63488 00:36:01.993 }, 00:36:01.993 { 00:36:01.993 "name": "BaseBdev2", 00:36:01.993 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:01.993 "is_configured": true, 00:36:01.993 "data_offset": 2048, 00:36:01.993 "data_size": 63488 00:36:01.993 }, 00:36:01.993 { 00:36:01.993 "name": "BaseBdev3", 00:36:01.993 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:01.993 "is_configured": true, 00:36:01.993 "data_offset": 2048, 00:36:01.993 "data_size": 63488 00:36:01.993 } 00:36:01.993 ] 00:36:01.993 }' 00:36:01.993 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:01.993 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:01.993 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:02.251 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:02.251 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:02.252 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:02.509 [2024-07-12 09:02:37.657889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:02.509 [2024-07-12 09:02:37.657959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.509 [2024-07-12 09:02:37.658002] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:36:02.509 [2024-07-12 09:02:37.658028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.509 [2024-07-12 09:02:37.658482] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.509 [2024-07-12 09:02:37.658530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:02.509 [2024-07-12 09:02:37.658655] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:02.509 [2024-07-12 09:02:37.658672] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:02.510 [2024-07-12 09:02:37.658679] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:02.510 BaseBdev1 00:36:02.510 09:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:03.882 "name": "raid_bdev1", 00:36:03.882 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:03.882 "strip_size_kb": 64, 00:36:03.882 "state": "online", 00:36:03.882 "raid_level": "raid5f", 00:36:03.882 "superblock": true, 00:36:03.882 "num_base_bdevs": 3, 00:36:03.882 "num_base_bdevs_discovered": 2, 00:36:03.882 "num_base_bdevs_operational": 2, 00:36:03.882 "base_bdevs_list": [ 00:36:03.882 { 00:36:03.882 "name": null, 00:36:03.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.882 "is_configured": false, 00:36:03.882 "data_offset": 2048, 00:36:03.882 "data_size": 63488 00:36:03.882 }, 00:36:03.882 { 00:36:03.882 "name": "BaseBdev2", 00:36:03.882 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:03.882 "is_configured": true, 00:36:03.882 "data_offset": 2048, 00:36:03.882 "data_size": 63488 00:36:03.882 }, 00:36:03.882 { 00:36:03.882 "name": "BaseBdev3", 00:36:03.882 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:03.882 "is_configured": true, 00:36:03.882 "data_offset": 2048, 00:36:03.882 "data_size": 63488 00:36:03.882 } 00:36:03.882 ] 00:36:03.882 }' 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:03.882 09:02:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.448 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.707 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:04.707 "name": "raid_bdev1", 00:36:04.707 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:04.707 "strip_size_kb": 64, 00:36:04.707 "state": "online", 00:36:04.707 "raid_level": "raid5f", 00:36:04.707 "superblock": true, 00:36:04.707 "num_base_bdevs": 3, 00:36:04.707 "num_base_bdevs_discovered": 2, 00:36:04.707 "num_base_bdevs_operational": 2, 00:36:04.707 "base_bdevs_list": [ 00:36:04.707 { 00:36:04.707 "name": null, 00:36:04.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.707 "is_configured": false, 00:36:04.707 "data_offset": 2048, 00:36:04.707 "data_size": 63488 00:36:04.707 }, 00:36:04.707 { 00:36:04.707 "name": "BaseBdev2", 00:36:04.707 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:04.707 "is_configured": true, 00:36:04.707 "data_offset": 2048, 00:36:04.707 "data_size": 63488 00:36:04.707 }, 00:36:04.707 { 00:36:04.707 "name": "BaseBdev3", 00:36:04.707 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:04.707 "is_configured": true, 00:36:04.707 "data_offset": 2048, 00:36:04.707 "data_size": 63488 00:36:04.707 } 00:36:04.707 ] 00:36:04.707 }' 00:36:04.707 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:04.707 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:04.707 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
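The NOT/valid_exec_arg plumbing above wraps a single RPC that is expected to be rejected: re-adding BaseBdev1, whose superblock is stale (seq_number 1 vs. 5) and no longer lists this bdev's uuid in raid_bdev1. A minimal sketch of that call and the response it should produce (both recorded verbatim below):

  # Expected to fail with JSON-RPC error -22:
  # "Failed to add base bdev to RAID bdev: Invalid argument"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_add_base_bdev raid_bdev1 BaseBdev1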
00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:04.966 09:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:04.966 [2024-07-12 09:02:40.121674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:04.966 [2024-07-12 09:02:40.121778] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:04.966 [2024-07-12 09:02:40.121792] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:04.966 request: 00:36:04.966 { 00:36:04.966 "base_bdev": "BaseBdev1", 00:36:04.966 "raid_bdev": "raid_bdev1", 00:36:04.966 "method": "bdev_raid_add_base_bdev", 00:36:04.966 "req_id": 1 00:36:04.966 } 00:36:04.966 Got JSON-RPC error response 00:36:04.966 response: 00:36:04.966 { 00:36:04.966 "code": -22, 00:36:04.966 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:04.966 } 00:36:04.966 09:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:36:04.966 09:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:04.966 09:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:04.966 09:02:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:04.966 09:02:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:06.341 "name": 
"raid_bdev1", 00:36:06.341 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:06.341 "strip_size_kb": 64, 00:36:06.341 "state": "online", 00:36:06.341 "raid_level": "raid5f", 00:36:06.341 "superblock": true, 00:36:06.341 "num_base_bdevs": 3, 00:36:06.341 "num_base_bdevs_discovered": 2, 00:36:06.341 "num_base_bdevs_operational": 2, 00:36:06.341 "base_bdevs_list": [ 00:36:06.341 { 00:36:06.341 "name": null, 00:36:06.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:06.341 "is_configured": false, 00:36:06.341 "data_offset": 2048, 00:36:06.341 "data_size": 63488 00:36:06.341 }, 00:36:06.341 { 00:36:06.341 "name": "BaseBdev2", 00:36:06.341 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:06.341 "is_configured": true, 00:36:06.341 "data_offset": 2048, 00:36:06.341 "data_size": 63488 00:36:06.341 }, 00:36:06.341 { 00:36:06.341 "name": "BaseBdev3", 00:36:06.341 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:06.341 "is_configured": true, 00:36:06.341 "data_offset": 2048, 00:36:06.341 "data_size": 63488 00:36:06.341 } 00:36:06.341 ] 00:36:06.341 }' 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:06.341 09:02:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.915 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.173 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:07.173 "name": "raid_bdev1", 00:36:07.173 "uuid": "376fef2e-0cd9-4a11-a257-0c8175ed0ce2", 00:36:07.173 "strip_size_kb": 64, 00:36:07.173 "state": "online", 00:36:07.173 "raid_level": "raid5f", 00:36:07.173 "superblock": true, 00:36:07.173 "num_base_bdevs": 3, 00:36:07.173 "num_base_bdevs_discovered": 2, 00:36:07.173 "num_base_bdevs_operational": 2, 00:36:07.173 "base_bdevs_list": [ 00:36:07.173 { 00:36:07.173 "name": null, 00:36:07.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.173 "is_configured": false, 00:36:07.173 "data_offset": 2048, 00:36:07.173 "data_size": 63488 00:36:07.173 }, 00:36:07.173 { 00:36:07.173 "name": "BaseBdev2", 00:36:07.173 "uuid": "c38d1fef-dd7b-58c8-b7d4-532c40bbf0f5", 00:36:07.173 "is_configured": true, 00:36:07.173 "data_offset": 2048, 00:36:07.173 "data_size": 63488 00:36:07.173 }, 00:36:07.173 { 00:36:07.173 "name": "BaseBdev3", 00:36:07.173 "uuid": "e09e021e-a509-54ed-9055-425a0ab6e7df", 00:36:07.173 "is_configured": true, 00:36:07.173 "data_offset": 2048, 00:36:07.173 "data_size": 63488 00:36:07.173 } 00:36:07.173 ] 00:36:07.173 }' 00:36:07.173 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:07.173 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- 
# [[ none == \n\o\n\e ]] 00:36:07.173 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 155974 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 155974 ']' 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 155974 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155974 00:36:07.431 killing process with pid 155974 00:36:07.431 Received shutdown signal, test time was about 60.000000 seconds 00:36:07.431 00:36:07.431 Latency(us) 00:36:07.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.431 =================================================================================================================== 00:36:07.431 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155974' 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 155974 00:36:07.431 09:02:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 155974 00:36:07.431 [2024-07-12 09:02:42.393932] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:07.431 [2024-07-12 09:02:42.394029] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:07.431 [2024-07-12 09:02:42.394096] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:07.431 [2024-07-12 09:02:42.394116] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:36:07.690 [2024-07-12 09:02:42.658784] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:08.623 ************************************ 00:36:08.623 END TEST raid5f_rebuild_test_sb 00:36:08.623 ************************************ 00:36:08.623 09:02:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:36:08.623 00:36:08.623 real 0m36.031s 00:36:08.623 user 0m57.125s 00:36:08.623 sys 0m3.713s 00:36:08.623 09:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:08.623 09:02:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.623 09:02:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:08.623 09:02:43 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:36:08.623 09:02:43 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:36:08.623 09:02:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:08.623 09:02:43 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.623 09:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:08.623 ************************************ 00:36:08.623 START TEST raid5f_state_function_test 00:36:08.623 ************************************ 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:08.623 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:08.623 Process raid pid: 156992 00:36:08.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
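For orientation at the start of this test case: raid_state_function_test brings up a bare bdev_svc app on its own RPC socket and then asks for a four-disk raid5f volume before any of the base bdevs exist, so Existed_Raid stays in the "configuring" state. A minimal sketch of the two commands it is about to issue (both appear verbatim in the trace below):

  # Start the bdev application the test talks to (pid 156992 in this run).
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # Request a 4-disk raid5f with a 64 KiB strip size; with no base bdevs discovered yet,
  # the array is created in the "configuring" state reported by bdev_raid_get_bdevs.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid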
00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=156992 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 156992' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 156992 /var/tmp/spdk-raid.sock 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 156992 ']' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:08.624 09:02:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.624 [2024-07-12 09:02:43.793560] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:36:08.624 [2024-07-12 09:02:43.793735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:08.881 [2024-07-12 09:02:43.949368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.138 [2024-07-12 09:02:44.133965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.139 [2024-07-12 09:02:44.322068] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:09.705 09:02:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:09.705 09:02:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:36:09.705 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:09.963 [2024-07-12 09:02:44.984858] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:09.963 [2024-07-12 09:02:44.984958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:09.963 [2024-07-12 09:02:44.984973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:09.963 [2024-07-12 09:02:44.984997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:09.963 [2024-07-12 09:02:44.985005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:09.963 [2024-07-12 09:02:44.985021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:09.963 [2024-07-12 09:02:44.985027] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:09.963 [2024-07-12 09:02:44.985048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:09.963 09:02:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.963 09:02:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.221 09:02:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:10.221 "name": "Existed_Raid", 00:36:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.221 "strip_size_kb": 64, 00:36:10.221 "state": "configuring", 00:36:10.221 "raid_level": "raid5f", 00:36:10.221 "superblock": false, 00:36:10.221 "num_base_bdevs": 4, 00:36:10.221 "num_base_bdevs_discovered": 0, 00:36:10.221 "num_base_bdevs_operational": 4, 00:36:10.221 "base_bdevs_list": [ 00:36:10.221 { 00:36:10.221 "name": "BaseBdev1", 00:36:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.221 "is_configured": false, 00:36:10.221 "data_offset": 0, 00:36:10.221 "data_size": 0 00:36:10.221 }, 00:36:10.221 { 00:36:10.221 "name": "BaseBdev2", 00:36:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.221 "is_configured": false, 00:36:10.221 "data_offset": 0, 00:36:10.221 "data_size": 0 00:36:10.221 }, 00:36:10.221 { 00:36:10.221 "name": "BaseBdev3", 00:36:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.221 "is_configured": false, 00:36:10.221 "data_offset": 0, 00:36:10.221 "data_size": 0 00:36:10.221 }, 00:36:10.221 { 00:36:10.221 "name": "BaseBdev4", 00:36:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.221 "is_configured": false, 00:36:10.221 "data_offset": 0, 00:36:10.221 "data_size": 0 00:36:10.221 } 00:36:10.221 ] 00:36:10.221 }' 00:36:10.221 09:02:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:10.221 09:02:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:10.787 09:02:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:11.045 [2024-07-12 09:02:46.072901] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:11.045 [2024-07-12 09:02:46.072932] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:11.045 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:11.303 [2024-07-12 09:02:46.324938] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:11.303 [2024-07-12 09:02:46.324985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:11.303 [2024-07-12 09:02:46.324995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:11.303 [2024-07-12 09:02:46.325041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:11.303 [2024-07-12 09:02:46.325050] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:11.303 [2024-07-12 09:02:46.325080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:11.303 [2024-07-12 09:02:46.325088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:11.303 [2024-07-12 09:02:46.325109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:11.303 09:02:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:11.562 [2024-07-12 09:02:46.550204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:11.562 BaseBdev1 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:11.562 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:11.820 [ 00:36:11.820 { 00:36:11.820 "name": "BaseBdev1", 00:36:11.820 "aliases": [ 00:36:11.820 "5b78f237-a2ae-4700-b03d-2d4bf032c08c" 00:36:11.820 ], 00:36:11.820 "product_name": "Malloc disk", 00:36:11.820 "block_size": 512, 00:36:11.820 "num_blocks": 65536, 00:36:11.820 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:11.820 "assigned_rate_limits": { 00:36:11.820 "rw_ios_per_sec": 0, 00:36:11.820 "rw_mbytes_per_sec": 0, 00:36:11.820 "r_mbytes_per_sec": 0, 00:36:11.820 "w_mbytes_per_sec": 0 00:36:11.820 }, 00:36:11.820 "claimed": true, 00:36:11.820 "claim_type": "exclusive_write", 00:36:11.820 "zoned": false, 00:36:11.820 "supported_io_types": { 00:36:11.820 "read": true, 00:36:11.820 "write": true, 00:36:11.820 "unmap": true, 00:36:11.820 "flush": true, 00:36:11.820 "reset": true, 00:36:11.820 "nvme_admin": false, 00:36:11.820 "nvme_io": false, 00:36:11.820 "nvme_io_md": false, 00:36:11.820 "write_zeroes": true, 00:36:11.820 "zcopy": true, 00:36:11.820 "get_zone_info": false, 00:36:11.820 "zone_management": false, 00:36:11.820 "zone_append": false, 00:36:11.820 "compare": false, 00:36:11.820 "compare_and_write": false, 00:36:11.820 "abort": true, 00:36:11.820 "seek_hole": false, 00:36:11.820 "seek_data": false, 00:36:11.820 "copy": true, 00:36:11.820 "nvme_iov_md": false 00:36:11.820 }, 00:36:11.820 "memory_domains": [ 00:36:11.820 { 00:36:11.820 "dma_device_id": "system", 00:36:11.820 "dma_device_type": 1 00:36:11.820 }, 00:36:11.820 { 00:36:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:11.820 "dma_device_type": 2 00:36:11.820 } 00:36:11.820 ], 00:36:11.820 "driver_specific": {} 00:36:11.820 } 00:36:11.820 ] 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.820 09:02:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.078 09:02:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:12.078 "name": "Existed_Raid", 00:36:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.078 "strip_size_kb": 64, 00:36:12.078 "state": "configuring", 00:36:12.078 "raid_level": "raid5f", 00:36:12.078 "superblock": false, 00:36:12.078 "num_base_bdevs": 4, 00:36:12.078 "num_base_bdevs_discovered": 1, 00:36:12.078 "num_base_bdevs_operational": 4, 00:36:12.078 "base_bdevs_list": [ 00:36:12.078 { 00:36:12.078 "name": "BaseBdev1", 00:36:12.078 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:12.078 "is_configured": true, 00:36:12.078 "data_offset": 0, 00:36:12.078 "data_size": 65536 00:36:12.078 }, 00:36:12.078 { 00:36:12.078 "name": "BaseBdev2", 00:36:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.078 "is_configured": false, 00:36:12.078 "data_offset": 0, 00:36:12.078 "data_size": 0 00:36:12.078 }, 00:36:12.078 { 00:36:12.078 "name": "BaseBdev3", 00:36:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.078 "is_configured": false, 00:36:12.078 "data_offset": 0, 00:36:12.078 "data_size": 0 00:36:12.078 }, 00:36:12.078 { 00:36:12.078 "name": "BaseBdev4", 00:36:12.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.078 "is_configured": false, 00:36:12.078 "data_offset": 0, 00:36:12.078 "data_size": 0 00:36:12.078 } 00:36:12.078 ] 00:36:12.078 }' 00:36:12.078 09:02:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:12.078 09:02:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.646 09:02:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:12.905 [2024-07-12 09:02:47.954477] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:12.905 [2024-07-12 09:02:47.954528] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:36:12.905 09:02:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:13.163 [2024-07-12 09:02:48.202528] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:13.163 [2024-07-12 09:02:48.204394] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:13.163 [2024-07-12 09:02:48.204446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:13.163 [2024-07-12 09:02:48.204458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:13.163 [2024-07-12 09:02:48.204482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:13.163 [2024-07-12 09:02:48.204491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:13.163 [2024-07-12 09:02:48.204518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.163 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.422 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:13.422 "name": "Existed_Raid", 00:36:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.422 "strip_size_kb": 64, 00:36:13.422 "state": "configuring", 00:36:13.422 "raid_level": "raid5f", 00:36:13.422 "superblock": false, 00:36:13.422 "num_base_bdevs": 4, 00:36:13.422 "num_base_bdevs_discovered": 1, 00:36:13.422 "num_base_bdevs_operational": 4, 00:36:13.422 "base_bdevs_list": [ 00:36:13.422 { 00:36:13.422 "name": "BaseBdev1", 00:36:13.422 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:13.422 "is_configured": true, 00:36:13.422 "data_offset": 0, 00:36:13.422 "data_size": 65536 00:36:13.422 }, 00:36:13.422 { 00:36:13.422 "name": "BaseBdev2", 00:36:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.422 "is_configured": false, 00:36:13.422 "data_offset": 0, 00:36:13.422 "data_size": 0 00:36:13.422 }, 00:36:13.422 { 
00:36:13.422 "name": "BaseBdev3", 00:36:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.422 "is_configured": false, 00:36:13.422 "data_offset": 0, 00:36:13.422 "data_size": 0 00:36:13.422 }, 00:36:13.422 { 00:36:13.422 "name": "BaseBdev4", 00:36:13.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.422 "is_configured": false, 00:36:13.422 "data_offset": 0, 00:36:13.422 "data_size": 0 00:36:13.422 } 00:36:13.422 ] 00:36:13.422 }' 00:36:13.422 09:02:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:13.422 09:02:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.990 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:14.249 [2024-07-12 09:02:49.383532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:14.249 BaseBdev2 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:14.249 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:14.508 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:14.767 [ 00:36:14.767 { 00:36:14.767 "name": "BaseBdev2", 00:36:14.767 "aliases": [ 00:36:14.767 "fd25d1a3-fdbe-406a-93ce-1580f252aa4c" 00:36:14.767 ], 00:36:14.767 "product_name": "Malloc disk", 00:36:14.767 "block_size": 512, 00:36:14.767 "num_blocks": 65536, 00:36:14.767 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:14.767 "assigned_rate_limits": { 00:36:14.767 "rw_ios_per_sec": 0, 00:36:14.767 "rw_mbytes_per_sec": 0, 00:36:14.767 "r_mbytes_per_sec": 0, 00:36:14.767 "w_mbytes_per_sec": 0 00:36:14.767 }, 00:36:14.767 "claimed": true, 00:36:14.767 "claim_type": "exclusive_write", 00:36:14.767 "zoned": false, 00:36:14.767 "supported_io_types": { 00:36:14.767 "read": true, 00:36:14.767 "write": true, 00:36:14.767 "unmap": true, 00:36:14.767 "flush": true, 00:36:14.767 "reset": true, 00:36:14.767 "nvme_admin": false, 00:36:14.767 "nvme_io": false, 00:36:14.767 "nvme_io_md": false, 00:36:14.767 "write_zeroes": true, 00:36:14.767 "zcopy": true, 00:36:14.767 "get_zone_info": false, 00:36:14.767 "zone_management": false, 00:36:14.767 "zone_append": false, 00:36:14.767 "compare": false, 00:36:14.767 "compare_and_write": false, 00:36:14.767 "abort": true, 00:36:14.767 "seek_hole": false, 00:36:14.767 "seek_data": false, 00:36:14.767 "copy": true, 00:36:14.767 "nvme_iov_md": false 00:36:14.767 }, 00:36:14.767 "memory_domains": [ 00:36:14.767 { 00:36:14.767 "dma_device_id": "system", 00:36:14.767 "dma_device_type": 1 00:36:14.767 }, 
00:36:14.767 { 00:36:14.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.767 "dma_device_type": 2 00:36:14.767 } 00:36:14.767 ], 00:36:14.767 "driver_specific": {} 00:36:14.767 } 00:36:14.767 ] 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.767 09:02:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.026 09:02:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:15.026 "name": "Existed_Raid", 00:36:15.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.026 "strip_size_kb": 64, 00:36:15.026 "state": "configuring", 00:36:15.026 "raid_level": "raid5f", 00:36:15.026 "superblock": false, 00:36:15.026 "num_base_bdevs": 4, 00:36:15.026 "num_base_bdevs_discovered": 2, 00:36:15.026 "num_base_bdevs_operational": 4, 00:36:15.026 "base_bdevs_list": [ 00:36:15.026 { 00:36:15.026 "name": "BaseBdev1", 00:36:15.026 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:15.026 "is_configured": true, 00:36:15.026 "data_offset": 0, 00:36:15.026 "data_size": 65536 00:36:15.026 }, 00:36:15.026 { 00:36:15.026 "name": "BaseBdev2", 00:36:15.026 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:15.026 "is_configured": true, 00:36:15.026 "data_offset": 0, 00:36:15.026 "data_size": 65536 00:36:15.026 }, 00:36:15.026 { 00:36:15.026 "name": "BaseBdev3", 00:36:15.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.026 "is_configured": false, 00:36:15.026 "data_offset": 0, 00:36:15.026 "data_size": 0 00:36:15.026 }, 00:36:15.026 { 00:36:15.026 "name": "BaseBdev4", 00:36:15.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.026 "is_configured": false, 00:36:15.026 "data_offset": 0, 00:36:15.026 "data_size": 0 00:36:15.026 } 00:36:15.026 ] 00:36:15.026 }' 00:36:15.026 09:02:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:36:15.026 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.591 09:02:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:15.850 [2024-07-12 09:02:50.885210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:15.850 BaseBdev3 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:15.850 09:02:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:16.109 09:02:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:16.367 [ 00:36:16.367 { 00:36:16.367 "name": "BaseBdev3", 00:36:16.367 "aliases": [ 00:36:16.367 "e6548c3b-359a-437f-b6a0-660b0a6678cc" 00:36:16.367 ], 00:36:16.367 "product_name": "Malloc disk", 00:36:16.367 "block_size": 512, 00:36:16.367 "num_blocks": 65536, 00:36:16.367 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:16.367 "assigned_rate_limits": { 00:36:16.367 "rw_ios_per_sec": 0, 00:36:16.367 "rw_mbytes_per_sec": 0, 00:36:16.367 "r_mbytes_per_sec": 0, 00:36:16.367 "w_mbytes_per_sec": 0 00:36:16.367 }, 00:36:16.367 "claimed": true, 00:36:16.367 "claim_type": "exclusive_write", 00:36:16.367 "zoned": false, 00:36:16.367 "supported_io_types": { 00:36:16.367 "read": true, 00:36:16.367 "write": true, 00:36:16.367 "unmap": true, 00:36:16.367 "flush": true, 00:36:16.367 "reset": true, 00:36:16.367 "nvme_admin": false, 00:36:16.367 "nvme_io": false, 00:36:16.367 "nvme_io_md": false, 00:36:16.367 "write_zeroes": true, 00:36:16.367 "zcopy": true, 00:36:16.367 "get_zone_info": false, 00:36:16.367 "zone_management": false, 00:36:16.367 "zone_append": false, 00:36:16.367 "compare": false, 00:36:16.367 "compare_and_write": false, 00:36:16.367 "abort": true, 00:36:16.367 "seek_hole": false, 00:36:16.367 "seek_data": false, 00:36:16.367 "copy": true, 00:36:16.367 "nvme_iov_md": false 00:36:16.367 }, 00:36:16.367 "memory_domains": [ 00:36:16.367 { 00:36:16.367 "dma_device_id": "system", 00:36:16.367 "dma_device_type": 1 00:36:16.367 }, 00:36:16.367 { 00:36:16.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.367 "dma_device_type": 2 00:36:16.367 } 00:36:16.367 ], 00:36:16.367 "driver_specific": {} 00:36:16.367 } 00:36:16.367 ] 00:36:16.367 09:02:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:16.367 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:16.368 09:02:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:16.368 "name": "Existed_Raid", 00:36:16.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.368 "strip_size_kb": 64, 00:36:16.368 "state": "configuring", 00:36:16.368 "raid_level": "raid5f", 00:36:16.368 "superblock": false, 00:36:16.368 "num_base_bdevs": 4, 00:36:16.368 "num_base_bdevs_discovered": 3, 00:36:16.368 "num_base_bdevs_operational": 4, 00:36:16.368 "base_bdevs_list": [ 00:36:16.368 { 00:36:16.368 "name": "BaseBdev1", 00:36:16.368 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:16.368 "is_configured": true, 00:36:16.368 "data_offset": 0, 00:36:16.368 "data_size": 65536 00:36:16.368 }, 00:36:16.368 { 00:36:16.368 "name": "BaseBdev2", 00:36:16.368 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:16.368 "is_configured": true, 00:36:16.368 "data_offset": 0, 00:36:16.368 "data_size": 65536 00:36:16.368 }, 00:36:16.368 { 00:36:16.368 "name": "BaseBdev3", 00:36:16.368 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:16.368 "is_configured": true, 00:36:16.368 "data_offset": 0, 00:36:16.368 "data_size": 65536 00:36:16.368 }, 00:36:16.368 { 00:36:16.368 "name": "BaseBdev4", 00:36:16.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.368 "is_configured": false, 00:36:16.368 "data_offset": 0, 00:36:16.368 "data_size": 0 00:36:16.368 } 00:36:16.368 ] 00:36:16.368 }' 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:16.368 09:02:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:17.303 [2024-07-12 09:02:52.425010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:17.303 [2024-07-12 09:02:52.425076] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007580 00:36:17.303 [2024-07-12 09:02:52.425087] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:17.303 [2024-07-12 09:02:52.425207] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:17.303 [2024-07-12 09:02:52.430774] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:36:17.303 [2024-07-12 09:02:52.430799] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:36:17.303 BaseBdev4 00:36:17.303 [2024-07-12 09:02:52.431098] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:17.303 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:17.561 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:17.818 [ 00:36:17.818 { 00:36:17.818 "name": "BaseBdev4", 00:36:17.818 "aliases": [ 00:36:17.818 "0e910e6d-f444-40c5-8dd8-279eb4cb3389" 00:36:17.818 ], 00:36:17.818 "product_name": "Malloc disk", 00:36:17.818 "block_size": 512, 00:36:17.818 "num_blocks": 65536, 00:36:17.818 "uuid": "0e910e6d-f444-40c5-8dd8-279eb4cb3389", 00:36:17.818 "assigned_rate_limits": { 00:36:17.818 "rw_ios_per_sec": 0, 00:36:17.818 "rw_mbytes_per_sec": 0, 00:36:17.818 "r_mbytes_per_sec": 0, 00:36:17.818 "w_mbytes_per_sec": 0 00:36:17.818 }, 00:36:17.818 "claimed": true, 00:36:17.818 "claim_type": "exclusive_write", 00:36:17.818 "zoned": false, 00:36:17.818 "supported_io_types": { 00:36:17.818 "read": true, 00:36:17.818 "write": true, 00:36:17.818 "unmap": true, 00:36:17.818 "flush": true, 00:36:17.818 "reset": true, 00:36:17.818 "nvme_admin": false, 00:36:17.818 "nvme_io": false, 00:36:17.818 "nvme_io_md": false, 00:36:17.818 "write_zeroes": true, 00:36:17.818 "zcopy": true, 00:36:17.818 "get_zone_info": false, 00:36:17.818 "zone_management": false, 00:36:17.818 "zone_append": false, 00:36:17.818 "compare": false, 00:36:17.818 "compare_and_write": false, 00:36:17.818 "abort": true, 00:36:17.818 "seek_hole": false, 00:36:17.818 "seek_data": false, 00:36:17.818 "copy": true, 00:36:17.818 "nvme_iov_md": false 00:36:17.818 }, 00:36:17.818 "memory_domains": [ 00:36:17.818 { 00:36:17.818 "dma_device_id": "system", 00:36:17.818 "dma_device_type": 1 00:36:17.818 }, 00:36:17.818 { 00:36:17.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.818 "dma_device_type": 2 00:36:17.818 } 00:36:17.818 ], 00:36:17.818 "driver_specific": {} 00:36:17.818 } 00:36:17.818 ] 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:17.818 09:02:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:17.818 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:17.819 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:17.819 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:17.819 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:17.819 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.819 09:02:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.076 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:18.076 "name": "Existed_Raid", 00:36:18.076 "uuid": "27a4d915-e7da-4abb-b71c-0cbd6c63b69e", 00:36:18.076 "strip_size_kb": 64, 00:36:18.076 "state": "online", 00:36:18.076 "raid_level": "raid5f", 00:36:18.076 "superblock": false, 00:36:18.076 "num_base_bdevs": 4, 00:36:18.076 "num_base_bdevs_discovered": 4, 00:36:18.076 "num_base_bdevs_operational": 4, 00:36:18.076 "base_bdevs_list": [ 00:36:18.076 { 00:36:18.076 "name": "BaseBdev1", 00:36:18.076 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:18.076 "is_configured": true, 00:36:18.076 "data_offset": 0, 00:36:18.076 "data_size": 65536 00:36:18.076 }, 00:36:18.076 { 00:36:18.076 "name": "BaseBdev2", 00:36:18.076 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:18.076 "is_configured": true, 00:36:18.076 "data_offset": 0, 00:36:18.076 "data_size": 65536 00:36:18.076 }, 00:36:18.076 { 00:36:18.076 "name": "BaseBdev3", 00:36:18.076 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:18.076 "is_configured": true, 00:36:18.076 "data_offset": 0, 00:36:18.076 "data_size": 65536 00:36:18.076 }, 00:36:18.076 { 00:36:18.076 "name": "BaseBdev4", 00:36:18.076 "uuid": "0e910e6d-f444-40c5-8dd8-279eb4cb3389", 00:36:18.076 "is_configured": true, 00:36:18.076 "data_offset": 0, 00:36:18.076 "data_size": 65536 00:36:18.076 } 00:36:18.076 ] 00:36:18.076 }' 00:36:18.076 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:18.076 09:02:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:18.641 09:02:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:18.899 [2024-07-12 09:02:54.081123] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:19.158 "name": "Existed_Raid", 00:36:19.158 "aliases": [ 00:36:19.158 "27a4d915-e7da-4abb-b71c-0cbd6c63b69e" 00:36:19.158 ], 00:36:19.158 "product_name": "Raid Volume", 00:36:19.158 "block_size": 512, 00:36:19.158 "num_blocks": 196608, 00:36:19.158 "uuid": "27a4d915-e7da-4abb-b71c-0cbd6c63b69e", 00:36:19.158 "assigned_rate_limits": { 00:36:19.158 "rw_ios_per_sec": 0, 00:36:19.158 "rw_mbytes_per_sec": 0, 00:36:19.158 "r_mbytes_per_sec": 0, 00:36:19.158 "w_mbytes_per_sec": 0 00:36:19.158 }, 00:36:19.158 "claimed": false, 00:36:19.158 "zoned": false, 00:36:19.158 "supported_io_types": { 00:36:19.158 "read": true, 00:36:19.158 "write": true, 00:36:19.158 "unmap": false, 00:36:19.158 "flush": false, 00:36:19.158 "reset": true, 00:36:19.158 "nvme_admin": false, 00:36:19.158 "nvme_io": false, 00:36:19.158 "nvme_io_md": false, 00:36:19.158 "write_zeroes": true, 00:36:19.158 "zcopy": false, 00:36:19.158 "get_zone_info": false, 00:36:19.158 "zone_management": false, 00:36:19.158 "zone_append": false, 00:36:19.158 "compare": false, 00:36:19.158 "compare_and_write": false, 00:36:19.158 "abort": false, 00:36:19.158 "seek_hole": false, 00:36:19.158 "seek_data": false, 00:36:19.158 "copy": false, 00:36:19.158 "nvme_iov_md": false 00:36:19.158 }, 00:36:19.158 "driver_specific": { 00:36:19.158 "raid": { 00:36:19.158 "uuid": "27a4d915-e7da-4abb-b71c-0cbd6c63b69e", 00:36:19.158 "strip_size_kb": 64, 00:36:19.158 "state": "online", 00:36:19.158 "raid_level": "raid5f", 00:36:19.158 "superblock": false, 00:36:19.158 "num_base_bdevs": 4, 00:36:19.158 "num_base_bdevs_discovered": 4, 00:36:19.158 "num_base_bdevs_operational": 4, 00:36:19.158 "base_bdevs_list": [ 00:36:19.158 { 00:36:19.158 "name": "BaseBdev1", 00:36:19.158 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:19.158 "is_configured": true, 00:36:19.158 "data_offset": 0, 00:36:19.158 "data_size": 65536 00:36:19.158 }, 00:36:19.158 { 00:36:19.158 "name": "BaseBdev2", 00:36:19.158 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:19.158 "is_configured": true, 00:36:19.158 "data_offset": 0, 00:36:19.158 "data_size": 65536 00:36:19.158 }, 00:36:19.158 { 00:36:19.158 "name": "BaseBdev3", 00:36:19.158 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:19.158 "is_configured": true, 00:36:19.158 "data_offset": 0, 00:36:19.158 "data_size": 65536 00:36:19.158 }, 00:36:19.158 { 00:36:19.158 "name": "BaseBdev4", 00:36:19.158 "uuid": "0e910e6d-f444-40c5-8dd8-279eb4cb3389", 00:36:19.158 "is_configured": true, 00:36:19.158 "data_offset": 0, 00:36:19.158 "data_size": 65536 00:36:19.158 } 
00:36:19.158 ] 00:36:19.158 } 00:36:19.158 } 00:36:19.158 }' 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:19.158 BaseBdev2 00:36:19.158 BaseBdev3 00:36:19.158 BaseBdev4' 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:19.158 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:19.416 "name": "BaseBdev1", 00:36:19.416 "aliases": [ 00:36:19.416 "5b78f237-a2ae-4700-b03d-2d4bf032c08c" 00:36:19.416 ], 00:36:19.416 "product_name": "Malloc disk", 00:36:19.416 "block_size": 512, 00:36:19.416 "num_blocks": 65536, 00:36:19.416 "uuid": "5b78f237-a2ae-4700-b03d-2d4bf032c08c", 00:36:19.416 "assigned_rate_limits": { 00:36:19.416 "rw_ios_per_sec": 0, 00:36:19.416 "rw_mbytes_per_sec": 0, 00:36:19.416 "r_mbytes_per_sec": 0, 00:36:19.416 "w_mbytes_per_sec": 0 00:36:19.416 }, 00:36:19.416 "claimed": true, 00:36:19.416 "claim_type": "exclusive_write", 00:36:19.416 "zoned": false, 00:36:19.416 "supported_io_types": { 00:36:19.416 "read": true, 00:36:19.416 "write": true, 00:36:19.416 "unmap": true, 00:36:19.416 "flush": true, 00:36:19.416 "reset": true, 00:36:19.416 "nvme_admin": false, 00:36:19.416 "nvme_io": false, 00:36:19.416 "nvme_io_md": false, 00:36:19.416 "write_zeroes": true, 00:36:19.416 "zcopy": true, 00:36:19.416 "get_zone_info": false, 00:36:19.416 "zone_management": false, 00:36:19.416 "zone_append": false, 00:36:19.416 "compare": false, 00:36:19.416 "compare_and_write": false, 00:36:19.416 "abort": true, 00:36:19.416 "seek_hole": false, 00:36:19.416 "seek_data": false, 00:36:19.416 "copy": true, 00:36:19.416 "nvme_iov_md": false 00:36:19.416 }, 00:36:19.416 "memory_domains": [ 00:36:19.416 { 00:36:19.416 "dma_device_id": "system", 00:36:19.416 "dma_device_type": 1 00:36:19.416 }, 00:36:19.416 { 00:36:19.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.416 "dma_device_type": 2 00:36:19.416 } 00:36:19.416 ], 00:36:19.416 "driver_specific": {} 00:36:19.416 }' 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:19.416 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:19.674 09:02:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:19.932 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:19.932 "name": "BaseBdev2", 00:36:19.932 "aliases": [ 00:36:19.932 "fd25d1a3-fdbe-406a-93ce-1580f252aa4c" 00:36:19.932 ], 00:36:19.932 "product_name": "Malloc disk", 00:36:19.932 "block_size": 512, 00:36:19.932 "num_blocks": 65536, 00:36:19.932 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:19.932 "assigned_rate_limits": { 00:36:19.932 "rw_ios_per_sec": 0, 00:36:19.932 "rw_mbytes_per_sec": 0, 00:36:19.932 "r_mbytes_per_sec": 0, 00:36:19.932 "w_mbytes_per_sec": 0 00:36:19.932 }, 00:36:19.932 "claimed": true, 00:36:19.932 "claim_type": "exclusive_write", 00:36:19.932 "zoned": false, 00:36:19.932 "supported_io_types": { 00:36:19.932 "read": true, 00:36:19.932 "write": true, 00:36:19.932 "unmap": true, 00:36:19.932 "flush": true, 00:36:19.932 "reset": true, 00:36:19.932 "nvme_admin": false, 00:36:19.932 "nvme_io": false, 00:36:19.932 "nvme_io_md": false, 00:36:19.932 "write_zeroes": true, 00:36:19.932 "zcopy": true, 00:36:19.932 "get_zone_info": false, 00:36:19.933 "zone_management": false, 00:36:19.933 "zone_append": false, 00:36:19.933 "compare": false, 00:36:19.933 "compare_and_write": false, 00:36:19.933 "abort": true, 00:36:19.933 "seek_hole": false, 00:36:19.933 "seek_data": false, 00:36:19.933 "copy": true, 00:36:19.933 "nvme_iov_md": false 00:36:19.933 }, 00:36:19.933 "memory_domains": [ 00:36:19.933 { 00:36:19.933 "dma_device_id": "system", 00:36:19.933 "dma_device_type": 1 00:36:19.933 }, 00:36:19.933 { 00:36:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:19.933 "dma_device_type": 2 00:36:19.933 } 00:36:19.933 ], 00:36:19.933 "driver_specific": {} 00:36:19.933 }' 00:36:19.933 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:20.190 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:20.448 09:02:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:36:20.448 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:20.706 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:20.706 "name": "BaseBdev3", 00:36:20.706 "aliases": [ 00:36:20.706 "e6548c3b-359a-437f-b6a0-660b0a6678cc" 00:36:20.706 ], 00:36:20.706 "product_name": "Malloc disk", 00:36:20.706 "block_size": 512, 00:36:20.706 "num_blocks": 65536, 00:36:20.706 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:20.706 "assigned_rate_limits": { 00:36:20.706 "rw_ios_per_sec": 0, 00:36:20.706 "rw_mbytes_per_sec": 0, 00:36:20.706 "r_mbytes_per_sec": 0, 00:36:20.706 "w_mbytes_per_sec": 0 00:36:20.706 }, 00:36:20.706 "claimed": true, 00:36:20.706 "claim_type": "exclusive_write", 00:36:20.706 "zoned": false, 00:36:20.706 "supported_io_types": { 00:36:20.706 "read": true, 00:36:20.706 "write": true, 00:36:20.706 "unmap": true, 00:36:20.706 "flush": true, 00:36:20.706 "reset": true, 00:36:20.706 "nvme_admin": false, 00:36:20.706 "nvme_io": false, 00:36:20.706 "nvme_io_md": false, 00:36:20.706 "write_zeroes": true, 00:36:20.706 "zcopy": true, 00:36:20.706 "get_zone_info": false, 00:36:20.706 "zone_management": false, 00:36:20.706 "zone_append": false, 00:36:20.706 "compare": false, 00:36:20.706 "compare_and_write": false, 00:36:20.706 "abort": true, 00:36:20.706 "seek_hole": false, 00:36:20.706 "seek_data": false, 00:36:20.706 "copy": true, 00:36:20.706 "nvme_iov_md": false 00:36:20.706 }, 00:36:20.706 "memory_domains": [ 00:36:20.706 { 00:36:20.706 "dma_device_id": "system", 00:36:20.706 "dma_device_type": 1 00:36:20.706 }, 00:36:20.706 { 00:36:20.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:20.706 "dma_device_type": 2 00:36:20.706 } 00:36:20.706 ], 00:36:20.706 "driver_specific": {} 00:36:20.706 }' 00:36:20.706 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:20.706 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:20.706 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:20.706 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:20.965 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:20.965 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:20.965 09:02:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:20.965 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:20.965 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:20.965 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:21.223 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:21.223 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:21.223 09:02:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:21.223 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:21.223 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:21.482 "name": "BaseBdev4", 00:36:21.482 "aliases": [ 00:36:21.482 "0e910e6d-f444-40c5-8dd8-279eb4cb3389" 00:36:21.482 ], 00:36:21.482 "product_name": "Malloc disk", 00:36:21.482 "block_size": 512, 00:36:21.482 "num_blocks": 65536, 00:36:21.482 "uuid": "0e910e6d-f444-40c5-8dd8-279eb4cb3389", 00:36:21.482 "assigned_rate_limits": { 00:36:21.482 "rw_ios_per_sec": 0, 00:36:21.482 "rw_mbytes_per_sec": 0, 00:36:21.482 "r_mbytes_per_sec": 0, 00:36:21.482 "w_mbytes_per_sec": 0 00:36:21.482 }, 00:36:21.482 "claimed": true, 00:36:21.482 "claim_type": "exclusive_write", 00:36:21.482 "zoned": false, 00:36:21.482 "supported_io_types": { 00:36:21.482 "read": true, 00:36:21.482 "write": true, 00:36:21.482 "unmap": true, 00:36:21.482 "flush": true, 00:36:21.482 "reset": true, 00:36:21.482 "nvme_admin": false, 00:36:21.482 "nvme_io": false, 00:36:21.482 "nvme_io_md": false, 00:36:21.482 "write_zeroes": true, 00:36:21.482 "zcopy": true, 00:36:21.482 "get_zone_info": false, 00:36:21.482 "zone_management": false, 00:36:21.482 "zone_append": false, 00:36:21.482 "compare": false, 00:36:21.482 "compare_and_write": false, 00:36:21.482 "abort": true, 00:36:21.482 "seek_hole": false, 00:36:21.482 "seek_data": false, 00:36:21.482 "copy": true, 00:36:21.482 "nvme_iov_md": false 00:36:21.482 }, 00:36:21.482 "memory_domains": [ 00:36:21.482 { 00:36:21.482 "dma_device_id": "system", 00:36:21.482 "dma_device_type": 1 00:36:21.482 }, 00:36:21.482 { 00:36:21.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.482 "dma_device_type": 2 00:36:21.482 } 00:36:21.482 ], 00:36:21.482 "driver_specific": {} 00:36:21.482 }' 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:21.482 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:21.741 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:22.000 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:22.000 09:02:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:22.000 [2024-07-12 09:02:57.166081] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:22.259 "name": "Existed_Raid", 00:36:22.259 "uuid": "27a4d915-e7da-4abb-b71c-0cbd6c63b69e", 00:36:22.259 "strip_size_kb": 64, 00:36:22.259 "state": "online", 00:36:22.259 "raid_level": "raid5f", 00:36:22.259 "superblock": false, 00:36:22.259 "num_base_bdevs": 4, 00:36:22.259 "num_base_bdevs_discovered": 3, 00:36:22.259 "num_base_bdevs_operational": 3, 00:36:22.259 "base_bdevs_list": [ 00:36:22.259 { 00:36:22.259 "name": null, 00:36:22.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.259 "is_configured": false, 00:36:22.259 "data_offset": 0, 00:36:22.259 "data_size": 65536 00:36:22.259 }, 00:36:22.259 { 00:36:22.259 "name": "BaseBdev2", 00:36:22.259 "uuid": "fd25d1a3-fdbe-406a-93ce-1580f252aa4c", 00:36:22.259 "is_configured": true, 00:36:22.259 "data_offset": 0, 00:36:22.259 "data_size": 65536 00:36:22.259 }, 00:36:22.259 { 00:36:22.259 "name": "BaseBdev3", 00:36:22.259 "uuid": "e6548c3b-359a-437f-b6a0-660b0a6678cc", 00:36:22.259 "is_configured": true, 00:36:22.259 "data_offset": 0, 00:36:22.259 "data_size": 65536 00:36:22.259 }, 00:36:22.259 { 00:36:22.259 "name": "BaseBdev4", 00:36:22.259 "uuid": "0e910e6d-f444-40c5-8dd8-279eb4cb3389", 00:36:22.259 "is_configured": true, 00:36:22.259 "data_offset": 0, 00:36:22.259 "data_size": 65536 00:36:22.259 } 00:36:22.259 ] 00:36:22.259 }' 00:36:22.259 09:02:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:22.259 09:02:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:23.194 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:23.453 [2024-07-12 09:02:58.604038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:23.453 [2024-07-12 09:02:58.604163] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:23.713 [2024-07-12 09:02:58.674210] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:23.713 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:23.713 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:23.713 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.713 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:23.972 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:23.972 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:23.972 09:02:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:23.972 [2024-07-12 09:02:59.111361] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:24.231 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:24.231 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:24.231 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.231 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:24.496 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:24.496 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:24.496 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:36:24.496 [2024-07-12 09:02:59.614418] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:36:24.496 [2024-07-12 09:02:59.614481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:36:24.754 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:24.754 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:24.754 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:24.754 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.012 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:25.012 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:25.012 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:36:25.012 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:36:25.012 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:25.013 09:02:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:25.013 BaseBdev2 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:25.013 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:25.272 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:25.531 [ 00:36:25.531 { 00:36:25.531 "name": "BaseBdev2", 00:36:25.531 "aliases": [ 00:36:25.531 "2479f55c-1da0-4b4e-bf42-e5e33c497669" 00:36:25.531 ], 00:36:25.531 "product_name": "Malloc disk", 00:36:25.531 "block_size": 512, 00:36:25.531 "num_blocks": 65536, 00:36:25.531 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:25.531 "assigned_rate_limits": { 00:36:25.531 "rw_ios_per_sec": 0, 00:36:25.531 "rw_mbytes_per_sec": 0, 00:36:25.531 "r_mbytes_per_sec": 0, 00:36:25.531 "w_mbytes_per_sec": 0 00:36:25.531 }, 00:36:25.531 "claimed": false, 00:36:25.531 "zoned": false, 00:36:25.531 "supported_io_types": { 00:36:25.531 "read": true, 00:36:25.531 "write": true, 00:36:25.531 "unmap": true, 00:36:25.531 "flush": true, 00:36:25.531 "reset": true, 00:36:25.531 "nvme_admin": false, 00:36:25.531 "nvme_io": false, 00:36:25.531 "nvme_io_md": false, 00:36:25.531 "write_zeroes": true, 00:36:25.531 "zcopy": true, 00:36:25.531 "get_zone_info": false, 00:36:25.531 "zone_management": false, 00:36:25.531 "zone_append": false, 00:36:25.531 
"compare": false, 00:36:25.531 "compare_and_write": false, 00:36:25.531 "abort": true, 00:36:25.531 "seek_hole": false, 00:36:25.531 "seek_data": false, 00:36:25.531 "copy": true, 00:36:25.531 "nvme_iov_md": false 00:36:25.531 }, 00:36:25.531 "memory_domains": [ 00:36:25.531 { 00:36:25.531 "dma_device_id": "system", 00:36:25.531 "dma_device_type": 1 00:36:25.531 }, 00:36:25.531 { 00:36:25.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:25.531 "dma_device_type": 2 00:36:25.531 } 00:36:25.531 ], 00:36:25.531 "driver_specific": {} 00:36:25.531 } 00:36:25.531 ] 00:36:25.531 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:25.531 09:03:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:36:25.531 09:03:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:25.531 09:03:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:25.790 BaseBdev3 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:25.790 09:03:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:26.050 [ 00:36:26.050 { 00:36:26.050 "name": "BaseBdev3", 00:36:26.050 "aliases": [ 00:36:26.050 "d4d043e4-5f3f-4678-af63-64dea0afeebf" 00:36:26.050 ], 00:36:26.050 "product_name": "Malloc disk", 00:36:26.050 "block_size": 512, 00:36:26.050 "num_blocks": 65536, 00:36:26.050 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:26.050 "assigned_rate_limits": { 00:36:26.050 "rw_ios_per_sec": 0, 00:36:26.050 "rw_mbytes_per_sec": 0, 00:36:26.050 "r_mbytes_per_sec": 0, 00:36:26.050 "w_mbytes_per_sec": 0 00:36:26.050 }, 00:36:26.050 "claimed": false, 00:36:26.050 "zoned": false, 00:36:26.050 "supported_io_types": { 00:36:26.050 "read": true, 00:36:26.050 "write": true, 00:36:26.050 "unmap": true, 00:36:26.050 "flush": true, 00:36:26.050 "reset": true, 00:36:26.050 "nvme_admin": false, 00:36:26.050 "nvme_io": false, 00:36:26.050 "nvme_io_md": false, 00:36:26.050 "write_zeroes": true, 00:36:26.050 "zcopy": true, 00:36:26.050 "get_zone_info": false, 00:36:26.050 "zone_management": false, 00:36:26.050 "zone_append": false, 00:36:26.050 "compare": false, 00:36:26.050 "compare_and_write": false, 00:36:26.050 "abort": true, 00:36:26.050 "seek_hole": false, 00:36:26.050 "seek_data": false, 00:36:26.050 "copy": true, 00:36:26.050 "nvme_iov_md": false 00:36:26.050 }, 00:36:26.050 "memory_domains": [ 00:36:26.050 { 00:36:26.051 "dma_device_id": "system", 
00:36:26.051 "dma_device_type": 1 00:36:26.051 }, 00:36:26.051 { 00:36:26.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.051 "dma_device_type": 2 00:36:26.051 } 00:36:26.051 ], 00:36:26.051 "driver_specific": {} 00:36:26.051 } 00:36:26.051 ] 00:36:26.051 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:26.051 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:36:26.051 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:26.051 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:26.319 BaseBdev4 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:26.319 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:26.590 09:03:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:26.590 [ 00:36:26.590 { 00:36:26.590 "name": "BaseBdev4", 00:36:26.590 "aliases": [ 00:36:26.590 "d0bce009-5a89-4187-b050-2bbc56233441" 00:36:26.590 ], 00:36:26.590 "product_name": "Malloc disk", 00:36:26.590 "block_size": 512, 00:36:26.590 "num_blocks": 65536, 00:36:26.590 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:26.590 "assigned_rate_limits": { 00:36:26.590 "rw_ios_per_sec": 0, 00:36:26.590 "rw_mbytes_per_sec": 0, 00:36:26.590 "r_mbytes_per_sec": 0, 00:36:26.590 "w_mbytes_per_sec": 0 00:36:26.590 }, 00:36:26.590 "claimed": false, 00:36:26.590 "zoned": false, 00:36:26.590 "supported_io_types": { 00:36:26.591 "read": true, 00:36:26.591 "write": true, 00:36:26.591 "unmap": true, 00:36:26.591 "flush": true, 00:36:26.591 "reset": true, 00:36:26.591 "nvme_admin": false, 00:36:26.591 "nvme_io": false, 00:36:26.591 "nvme_io_md": false, 00:36:26.591 "write_zeroes": true, 00:36:26.591 "zcopy": true, 00:36:26.591 "get_zone_info": false, 00:36:26.591 "zone_management": false, 00:36:26.591 "zone_append": false, 00:36:26.591 "compare": false, 00:36:26.591 "compare_and_write": false, 00:36:26.591 "abort": true, 00:36:26.591 "seek_hole": false, 00:36:26.591 "seek_data": false, 00:36:26.591 "copy": true, 00:36:26.591 "nvme_iov_md": false 00:36:26.591 }, 00:36:26.591 "memory_domains": [ 00:36:26.591 { 00:36:26.591 "dma_device_id": "system", 00:36:26.591 "dma_device_type": 1 00:36:26.591 }, 00:36:26.591 { 00:36:26.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.591 "dma_device_type": 2 00:36:26.591 } 00:36:26.591 ], 00:36:26.591 "driver_specific": {} 00:36:26.591 } 00:36:26.591 ] 00:36:26.591 09:03:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:36:26.591 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:36:26.591 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:26.591 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:26.853 [2024-07-12 09:03:01.956535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:26.853 [2024-07-12 09:03:01.956623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:26.853 [2024-07-12 09:03:01.956654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:26.853 [2024-07-12 09:03:01.958558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:26.853 [2024-07-12 09:03:01.958641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.853 09:03:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:27.111 09:03:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:27.111 "name": "Existed_Raid", 00:36:27.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.111 "strip_size_kb": 64, 00:36:27.111 "state": "configuring", 00:36:27.111 "raid_level": "raid5f", 00:36:27.111 "superblock": false, 00:36:27.111 "num_base_bdevs": 4, 00:36:27.111 "num_base_bdevs_discovered": 3, 00:36:27.111 "num_base_bdevs_operational": 4, 00:36:27.111 "base_bdevs_list": [ 00:36:27.111 { 00:36:27.111 "name": "BaseBdev1", 00:36:27.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.111 "is_configured": false, 00:36:27.111 "data_offset": 0, 00:36:27.111 "data_size": 0 00:36:27.111 }, 00:36:27.111 { 00:36:27.111 "name": "BaseBdev2", 00:36:27.111 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:27.111 "is_configured": true, 00:36:27.111 "data_offset": 0, 
00:36:27.111 "data_size": 65536 00:36:27.111 }, 00:36:27.111 { 00:36:27.111 "name": "BaseBdev3", 00:36:27.111 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:27.111 "is_configured": true, 00:36:27.111 "data_offset": 0, 00:36:27.111 "data_size": 65536 00:36:27.111 }, 00:36:27.111 { 00:36:27.111 "name": "BaseBdev4", 00:36:27.111 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:27.111 "is_configured": true, 00:36:27.111 "data_offset": 0, 00:36:27.111 "data_size": 65536 00:36:27.111 } 00:36:27.111 ] 00:36:27.111 }' 00:36:27.111 09:03:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:27.111 09:03:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.677 09:03:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:36:27.934 [2024-07-12 09:03:03.016706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:27.934 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.935 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.191 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:28.191 "name": "Existed_Raid", 00:36:28.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.191 "strip_size_kb": 64, 00:36:28.191 "state": "configuring", 00:36:28.191 "raid_level": "raid5f", 00:36:28.191 "superblock": false, 00:36:28.191 "num_base_bdevs": 4, 00:36:28.191 "num_base_bdevs_discovered": 2, 00:36:28.191 "num_base_bdevs_operational": 4, 00:36:28.191 "base_bdevs_list": [ 00:36:28.191 { 00:36:28.191 "name": "BaseBdev1", 00:36:28.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.191 "is_configured": false, 00:36:28.191 "data_offset": 0, 00:36:28.191 "data_size": 0 00:36:28.191 }, 00:36:28.191 { 00:36:28.192 "name": null, 00:36:28.192 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:28.192 "is_configured": false, 00:36:28.192 "data_offset": 0, 00:36:28.192 "data_size": 65536 00:36:28.192 }, 00:36:28.192 { 00:36:28.192 "name": "BaseBdev3", 00:36:28.192 "uuid": 
"d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:28.192 "is_configured": true, 00:36:28.192 "data_offset": 0, 00:36:28.192 "data_size": 65536 00:36:28.192 }, 00:36:28.192 { 00:36:28.192 "name": "BaseBdev4", 00:36:28.192 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:28.192 "is_configured": true, 00:36:28.192 "data_offset": 0, 00:36:28.192 "data_size": 65536 00:36:28.192 } 00:36:28.192 ] 00:36:28.192 }' 00:36:28.192 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:28.192 09:03:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:29.124 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.124 09:03:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:29.124 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:36:29.124 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:29.382 [2024-07-12 09:03:04.502141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:29.382 BaseBdev1 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:29.382 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:29.641 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:29.901 [ 00:36:29.901 { 00:36:29.901 "name": "BaseBdev1", 00:36:29.901 "aliases": [ 00:36:29.901 "c582533f-c1fc-4097-9391-832907ff63df" 00:36:29.901 ], 00:36:29.901 "product_name": "Malloc disk", 00:36:29.901 "block_size": 512, 00:36:29.901 "num_blocks": 65536, 00:36:29.901 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:29.901 "assigned_rate_limits": { 00:36:29.901 "rw_ios_per_sec": 0, 00:36:29.901 "rw_mbytes_per_sec": 0, 00:36:29.901 "r_mbytes_per_sec": 0, 00:36:29.901 "w_mbytes_per_sec": 0 00:36:29.901 }, 00:36:29.901 "claimed": true, 00:36:29.901 "claim_type": "exclusive_write", 00:36:29.901 "zoned": false, 00:36:29.901 "supported_io_types": { 00:36:29.901 "read": true, 00:36:29.901 "write": true, 00:36:29.901 "unmap": true, 00:36:29.901 "flush": true, 00:36:29.901 "reset": true, 00:36:29.901 "nvme_admin": false, 00:36:29.901 "nvme_io": false, 00:36:29.901 "nvme_io_md": false, 00:36:29.901 "write_zeroes": true, 00:36:29.901 "zcopy": true, 00:36:29.901 "get_zone_info": false, 00:36:29.901 "zone_management": false, 00:36:29.901 "zone_append": false, 
00:36:29.901 "compare": false, 00:36:29.901 "compare_and_write": false, 00:36:29.901 "abort": true, 00:36:29.901 "seek_hole": false, 00:36:29.901 "seek_data": false, 00:36:29.901 "copy": true, 00:36:29.901 "nvme_iov_md": false 00:36:29.901 }, 00:36:29.901 "memory_domains": [ 00:36:29.901 { 00:36:29.901 "dma_device_id": "system", 00:36:29.901 "dma_device_type": 1 00:36:29.901 }, 00:36:29.901 { 00:36:29.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.901 "dma_device_type": 2 00:36:29.901 } 00:36:29.901 ], 00:36:29.901 "driver_specific": {} 00:36:29.901 } 00:36:29.901 ] 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.901 09:03:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.160 09:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:30.160 "name": "Existed_Raid", 00:36:30.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.160 "strip_size_kb": 64, 00:36:30.160 "state": "configuring", 00:36:30.160 "raid_level": "raid5f", 00:36:30.160 "superblock": false, 00:36:30.160 "num_base_bdevs": 4, 00:36:30.160 "num_base_bdevs_discovered": 3, 00:36:30.160 "num_base_bdevs_operational": 4, 00:36:30.160 "base_bdevs_list": [ 00:36:30.160 { 00:36:30.160 "name": "BaseBdev1", 00:36:30.160 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:30.160 "is_configured": true, 00:36:30.160 "data_offset": 0, 00:36:30.160 "data_size": 65536 00:36:30.160 }, 00:36:30.160 { 00:36:30.160 "name": null, 00:36:30.160 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:30.160 "is_configured": false, 00:36:30.160 "data_offset": 0, 00:36:30.160 "data_size": 65536 00:36:30.160 }, 00:36:30.160 { 00:36:30.160 "name": "BaseBdev3", 00:36:30.160 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:30.160 "is_configured": true, 00:36:30.160 "data_offset": 0, 00:36:30.160 "data_size": 65536 00:36:30.160 }, 00:36:30.160 { 00:36:30.160 "name": "BaseBdev4", 00:36:30.160 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:30.160 "is_configured": true, 00:36:30.160 "data_offset": 0, 00:36:30.160 
"data_size": 65536 00:36:30.160 } 00:36:30.160 ] 00:36:30.160 }' 00:36:30.160 09:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:30.160 09:03:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:30.727 09:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.727 09:03:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:30.986 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:36:30.986 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:36:31.245 [2024-07-12 09:03:06.202528] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.245 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.503 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:31.503 "name": "Existed_Raid", 00:36:31.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.503 "strip_size_kb": 64, 00:36:31.503 "state": "configuring", 00:36:31.503 "raid_level": "raid5f", 00:36:31.503 "superblock": false, 00:36:31.503 "num_base_bdevs": 4, 00:36:31.503 "num_base_bdevs_discovered": 2, 00:36:31.503 "num_base_bdevs_operational": 4, 00:36:31.503 "base_bdevs_list": [ 00:36:31.503 { 00:36:31.503 "name": "BaseBdev1", 00:36:31.503 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:31.503 "is_configured": true, 00:36:31.503 "data_offset": 0, 00:36:31.504 "data_size": 65536 00:36:31.504 }, 00:36:31.504 { 00:36:31.504 "name": null, 00:36:31.504 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:31.504 "is_configured": false, 00:36:31.504 "data_offset": 0, 00:36:31.504 "data_size": 65536 00:36:31.504 }, 00:36:31.504 { 00:36:31.504 "name": null, 00:36:31.504 "uuid": 
"d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:31.504 "is_configured": false, 00:36:31.504 "data_offset": 0, 00:36:31.504 "data_size": 65536 00:36:31.504 }, 00:36:31.504 { 00:36:31.504 "name": "BaseBdev4", 00:36:31.504 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:31.504 "is_configured": true, 00:36:31.504 "data_offset": 0, 00:36:31.504 "data_size": 65536 00:36:31.504 } 00:36:31.504 ] 00:36:31.504 }' 00:36:31.504 09:03:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:31.504 09:03:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:32.070 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.070 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:32.328 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:36:32.328 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:32.586 [2024-07-12 09:03:07.586827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.586 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.845 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:32.845 "name": "Existed_Raid", 00:36:32.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:32.845 "strip_size_kb": 64, 00:36:32.845 "state": "configuring", 00:36:32.845 "raid_level": "raid5f", 00:36:32.845 "superblock": false, 00:36:32.845 "num_base_bdevs": 4, 00:36:32.845 "num_base_bdevs_discovered": 3, 00:36:32.845 "num_base_bdevs_operational": 4, 00:36:32.845 "base_bdevs_list": [ 00:36:32.845 { 00:36:32.845 "name": "BaseBdev1", 00:36:32.845 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:32.845 "is_configured": true, 00:36:32.845 
"data_offset": 0, 00:36:32.845 "data_size": 65536 00:36:32.845 }, 00:36:32.845 { 00:36:32.845 "name": null, 00:36:32.845 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:32.845 "is_configured": false, 00:36:32.845 "data_offset": 0, 00:36:32.845 "data_size": 65536 00:36:32.845 }, 00:36:32.845 { 00:36:32.845 "name": "BaseBdev3", 00:36:32.845 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:32.845 "is_configured": true, 00:36:32.845 "data_offset": 0, 00:36:32.845 "data_size": 65536 00:36:32.845 }, 00:36:32.845 { 00:36:32.845 "name": "BaseBdev4", 00:36:32.845 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:32.845 "is_configured": true, 00:36:32.845 "data_offset": 0, 00:36:32.845 "data_size": 65536 00:36:32.845 } 00:36:32.845 ] 00:36:32.845 }' 00:36:32.845 09:03:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:32.845 09:03:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:33.412 09:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.412 09:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:33.670 09:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:36:33.671 09:03:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:33.929 [2024-07-12 09:03:09.071070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.188 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:34.447 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:34.447 "name": "Existed_Raid", 00:36:34.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.447 "strip_size_kb": 64, 00:36:34.447 "state": "configuring", 00:36:34.447 "raid_level": "raid5f", 00:36:34.447 "superblock": false, 00:36:34.447 
"num_base_bdevs": 4, 00:36:34.447 "num_base_bdevs_discovered": 2, 00:36:34.447 "num_base_bdevs_operational": 4, 00:36:34.447 "base_bdevs_list": [ 00:36:34.447 { 00:36:34.447 "name": null, 00:36:34.447 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:34.447 "is_configured": false, 00:36:34.447 "data_offset": 0, 00:36:34.447 "data_size": 65536 00:36:34.447 }, 00:36:34.447 { 00:36:34.447 "name": null, 00:36:34.447 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:34.447 "is_configured": false, 00:36:34.447 "data_offset": 0, 00:36:34.447 "data_size": 65536 00:36:34.447 }, 00:36:34.447 { 00:36:34.447 "name": "BaseBdev3", 00:36:34.447 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:34.447 "is_configured": true, 00:36:34.447 "data_offset": 0, 00:36:34.447 "data_size": 65536 00:36:34.447 }, 00:36:34.447 { 00:36:34.447 "name": "BaseBdev4", 00:36:34.447 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:34.447 "is_configured": true, 00:36:34.447 "data_offset": 0, 00:36:34.447 "data_size": 65536 00:36:34.447 } 00:36:34.447 ] 00:36:34.447 }' 00:36:34.447 09:03:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:34.447 09:03:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:35.015 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.015 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:35.274 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:36:35.274 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:35.532 [2024-07-12 09:03:10.520449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.532 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:35.791 09:03:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:35.791 "name": "Existed_Raid", 00:36:35.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.791 "strip_size_kb": 64, 00:36:35.791 "state": "configuring", 00:36:35.791 "raid_level": "raid5f", 00:36:35.791 "superblock": false, 00:36:35.791 "num_base_bdevs": 4, 00:36:35.791 "num_base_bdevs_discovered": 3, 00:36:35.791 "num_base_bdevs_operational": 4, 00:36:35.791 "base_bdevs_list": [ 00:36:35.791 { 00:36:35.791 "name": null, 00:36:35.791 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:35.791 "is_configured": false, 00:36:35.791 "data_offset": 0, 00:36:35.791 "data_size": 65536 00:36:35.791 }, 00:36:35.791 { 00:36:35.791 "name": "BaseBdev2", 00:36:35.791 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:35.791 "is_configured": true, 00:36:35.791 "data_offset": 0, 00:36:35.791 "data_size": 65536 00:36:35.791 }, 00:36:35.791 { 00:36:35.791 "name": "BaseBdev3", 00:36:35.791 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:35.791 "is_configured": true, 00:36:35.791 "data_offset": 0, 00:36:35.791 "data_size": 65536 00:36:35.791 }, 00:36:35.791 { 00:36:35.791 "name": "BaseBdev4", 00:36:35.791 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:35.791 "is_configured": true, 00:36:35.791 "data_offset": 0, 00:36:35.791 "data_size": 65536 00:36:35.791 } 00:36:35.791 ] 00:36:35.791 }' 00:36:35.791 09:03:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:35.791 09:03:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.727 09:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.727 09:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:36.727 09:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:36:36.727 09:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.727 09:03:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:36.985 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c582533f-c1fc-4097-9391-832907ff63df 00:36:37.244 [2024-07-12 09:03:12.306533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:37.244 [2024-07-12 09:03:12.306589] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:36:37.244 [2024-07-12 09:03:12.306599] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:37.244 [2024-07-12 09:03:12.306704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:37.244 [2024-07-12 09:03:12.311821] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:36:37.244 [2024-07-12 09:03:12.311845] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:36:37.244 [2024-07-12 09:03:12.312122] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.244 NewBaseBdev 00:36:37.244 09:03:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:37.244 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:37.503 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:37.761 [ 00:36:37.761 { 00:36:37.761 "name": "NewBaseBdev", 00:36:37.761 "aliases": [ 00:36:37.761 "c582533f-c1fc-4097-9391-832907ff63df" 00:36:37.761 ], 00:36:37.761 "product_name": "Malloc disk", 00:36:37.761 "block_size": 512, 00:36:37.761 "num_blocks": 65536, 00:36:37.761 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:37.761 "assigned_rate_limits": { 00:36:37.761 "rw_ios_per_sec": 0, 00:36:37.761 "rw_mbytes_per_sec": 0, 00:36:37.761 "r_mbytes_per_sec": 0, 00:36:37.761 "w_mbytes_per_sec": 0 00:36:37.761 }, 00:36:37.761 "claimed": true, 00:36:37.761 "claim_type": "exclusive_write", 00:36:37.761 "zoned": false, 00:36:37.761 "supported_io_types": { 00:36:37.761 "read": true, 00:36:37.761 "write": true, 00:36:37.761 "unmap": true, 00:36:37.761 "flush": true, 00:36:37.761 "reset": true, 00:36:37.761 "nvme_admin": false, 00:36:37.761 "nvme_io": false, 00:36:37.761 "nvme_io_md": false, 00:36:37.761 "write_zeroes": true, 00:36:37.761 "zcopy": true, 00:36:37.761 "get_zone_info": false, 00:36:37.761 "zone_management": false, 00:36:37.761 "zone_append": false, 00:36:37.761 "compare": false, 00:36:37.761 "compare_and_write": false, 00:36:37.761 "abort": true, 00:36:37.761 "seek_hole": false, 00:36:37.761 "seek_data": false, 00:36:37.761 "copy": true, 00:36:37.761 "nvme_iov_md": false 00:36:37.761 }, 00:36:37.761 "memory_domains": [ 00:36:37.761 { 00:36:37.761 "dma_device_id": "system", 00:36:37.761 "dma_device_type": 1 00:36:37.761 }, 00:36:37.761 { 00:36:37.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.761 "dma_device_type": 2 00:36:37.761 } 00:36:37.761 ], 00:36:37.761 "driver_specific": {} 00:36:37.761 } 00:36:37.761 ] 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.761 09:03:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:38.033 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:38.033 "name": "Existed_Raid", 00:36:38.033 "uuid": "eebff7c7-63e4-4595-ae54-772754e34b26", 00:36:38.033 "strip_size_kb": 64, 00:36:38.033 "state": "online", 00:36:38.033 "raid_level": "raid5f", 00:36:38.033 "superblock": false, 00:36:38.033 "num_base_bdevs": 4, 00:36:38.033 "num_base_bdevs_discovered": 4, 00:36:38.033 "num_base_bdevs_operational": 4, 00:36:38.033 "base_bdevs_list": [ 00:36:38.033 { 00:36:38.033 "name": "NewBaseBdev", 00:36:38.033 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:38.033 "is_configured": true, 00:36:38.033 "data_offset": 0, 00:36:38.033 "data_size": 65536 00:36:38.033 }, 00:36:38.033 { 00:36:38.033 "name": "BaseBdev2", 00:36:38.033 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:38.033 "is_configured": true, 00:36:38.033 "data_offset": 0, 00:36:38.033 "data_size": 65536 00:36:38.033 }, 00:36:38.033 { 00:36:38.033 "name": "BaseBdev3", 00:36:38.033 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:38.033 "is_configured": true, 00:36:38.033 "data_offset": 0, 00:36:38.033 "data_size": 65536 00:36:38.033 }, 00:36:38.033 { 00:36:38.033 "name": "BaseBdev4", 00:36:38.033 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:38.033 "is_configured": true, 00:36:38.033 "data_offset": 0, 00:36:38.033 "data_size": 65536 00:36:38.033 } 00:36:38.033 ] 00:36:38.033 }' 00:36:38.033 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:38.033 09:03:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:38.605 09:03:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:39.171 [2024-07-12 09:03:14.062398] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:39.171 09:03:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:39.171 "name": "Existed_Raid", 00:36:39.171 "aliases": [ 00:36:39.171 "eebff7c7-63e4-4595-ae54-772754e34b26" 00:36:39.171 ], 00:36:39.171 "product_name": "Raid Volume", 00:36:39.171 "block_size": 512, 00:36:39.171 "num_blocks": 196608, 00:36:39.171 "uuid": "eebff7c7-63e4-4595-ae54-772754e34b26", 00:36:39.171 "assigned_rate_limits": { 00:36:39.171 "rw_ios_per_sec": 0, 00:36:39.171 "rw_mbytes_per_sec": 0, 00:36:39.171 "r_mbytes_per_sec": 0, 00:36:39.171 "w_mbytes_per_sec": 0 00:36:39.171 }, 00:36:39.171 "claimed": false, 00:36:39.171 "zoned": false, 00:36:39.171 "supported_io_types": { 00:36:39.171 "read": true, 00:36:39.171 "write": true, 00:36:39.171 "unmap": false, 00:36:39.171 "flush": false, 00:36:39.171 "reset": true, 00:36:39.171 "nvme_admin": false, 00:36:39.171 "nvme_io": false, 00:36:39.171 "nvme_io_md": false, 00:36:39.171 "write_zeroes": true, 00:36:39.171 "zcopy": false, 00:36:39.171 "get_zone_info": false, 00:36:39.171 "zone_management": false, 00:36:39.171 "zone_append": false, 00:36:39.171 "compare": false, 00:36:39.171 "compare_and_write": false, 00:36:39.171 "abort": false, 00:36:39.171 "seek_hole": false, 00:36:39.171 "seek_data": false, 00:36:39.171 "copy": false, 00:36:39.171 "nvme_iov_md": false 00:36:39.171 }, 00:36:39.171 "driver_specific": { 00:36:39.171 "raid": { 00:36:39.171 "uuid": "eebff7c7-63e4-4595-ae54-772754e34b26", 00:36:39.171 "strip_size_kb": 64, 00:36:39.171 "state": "online", 00:36:39.171 "raid_level": "raid5f", 00:36:39.171 "superblock": false, 00:36:39.171 "num_base_bdevs": 4, 00:36:39.171 "num_base_bdevs_discovered": 4, 00:36:39.171 "num_base_bdevs_operational": 4, 00:36:39.171 "base_bdevs_list": [ 00:36:39.171 { 00:36:39.171 "name": "NewBaseBdev", 00:36:39.171 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:39.171 "is_configured": true, 00:36:39.171 "data_offset": 0, 00:36:39.171 "data_size": 65536 00:36:39.171 }, 00:36:39.171 { 00:36:39.171 "name": "BaseBdev2", 00:36:39.171 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:39.171 "is_configured": true, 00:36:39.171 "data_offset": 0, 00:36:39.171 "data_size": 65536 00:36:39.171 }, 00:36:39.171 { 00:36:39.171 "name": "BaseBdev3", 00:36:39.171 "uuid": "d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:39.171 "is_configured": true, 00:36:39.171 "data_offset": 0, 00:36:39.171 "data_size": 65536 00:36:39.171 }, 00:36:39.171 { 00:36:39.171 "name": "BaseBdev4", 00:36:39.171 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:39.171 "is_configured": true, 00:36:39.171 "data_offset": 0, 00:36:39.171 "data_size": 65536 00:36:39.171 } 00:36:39.171 ] 00:36:39.171 } 00:36:39.171 } 00:36:39.171 }' 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:36:39.172 BaseBdev2 00:36:39.172 BaseBdev3 00:36:39.172 BaseBdev4' 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:39.172 "name": "NewBaseBdev", 00:36:39.172 "aliases": [ 00:36:39.172 "c582533f-c1fc-4097-9391-832907ff63df" 00:36:39.172 ], 00:36:39.172 "product_name": "Malloc disk", 00:36:39.172 "block_size": 512, 00:36:39.172 "num_blocks": 65536, 00:36:39.172 "uuid": "c582533f-c1fc-4097-9391-832907ff63df", 00:36:39.172 "assigned_rate_limits": { 00:36:39.172 "rw_ios_per_sec": 0, 00:36:39.172 "rw_mbytes_per_sec": 0, 00:36:39.172 "r_mbytes_per_sec": 0, 00:36:39.172 "w_mbytes_per_sec": 0 00:36:39.172 }, 00:36:39.172 "claimed": true, 00:36:39.172 "claim_type": "exclusive_write", 00:36:39.172 "zoned": false, 00:36:39.172 "supported_io_types": { 00:36:39.172 "read": true, 00:36:39.172 "write": true, 00:36:39.172 "unmap": true, 00:36:39.172 "flush": true, 00:36:39.172 "reset": true, 00:36:39.172 "nvme_admin": false, 00:36:39.172 "nvme_io": false, 00:36:39.172 "nvme_io_md": false, 00:36:39.172 "write_zeroes": true, 00:36:39.172 "zcopy": true, 00:36:39.172 "get_zone_info": false, 00:36:39.172 "zone_management": false, 00:36:39.172 "zone_append": false, 00:36:39.172 "compare": false, 00:36:39.172 "compare_and_write": false, 00:36:39.172 "abort": true, 00:36:39.172 "seek_hole": false, 00:36:39.172 "seek_data": false, 00:36:39.172 "copy": true, 00:36:39.172 "nvme_iov_md": false 00:36:39.172 }, 00:36:39.172 "memory_domains": [ 00:36:39.172 { 00:36:39.172 "dma_device_id": "system", 00:36:39.172 "dma_device_type": 1 00:36:39.172 }, 00:36:39.172 { 00:36:39.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.172 "dma_device_type": 2 00:36:39.172 } 00:36:39.172 ], 00:36:39.172 "driver_specific": {} 00:36:39.172 }' 00:36:39.172 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:39.431 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:39.690 09:03:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:39.949 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:39.949 "name": "BaseBdev2", 00:36:39.949 "aliases": [ 00:36:39.949 "2479f55c-1da0-4b4e-bf42-e5e33c497669" 
00:36:39.949 ], 00:36:39.949 "product_name": "Malloc disk", 00:36:39.949 "block_size": 512, 00:36:39.949 "num_blocks": 65536, 00:36:39.949 "uuid": "2479f55c-1da0-4b4e-bf42-e5e33c497669", 00:36:39.949 "assigned_rate_limits": { 00:36:39.949 "rw_ios_per_sec": 0, 00:36:39.949 "rw_mbytes_per_sec": 0, 00:36:39.949 "r_mbytes_per_sec": 0, 00:36:39.949 "w_mbytes_per_sec": 0 00:36:39.949 }, 00:36:39.949 "claimed": true, 00:36:39.949 "claim_type": "exclusive_write", 00:36:39.949 "zoned": false, 00:36:39.949 "supported_io_types": { 00:36:39.949 "read": true, 00:36:39.949 "write": true, 00:36:39.949 "unmap": true, 00:36:39.949 "flush": true, 00:36:39.949 "reset": true, 00:36:39.949 "nvme_admin": false, 00:36:39.949 "nvme_io": false, 00:36:39.949 "nvme_io_md": false, 00:36:39.949 "write_zeroes": true, 00:36:39.949 "zcopy": true, 00:36:39.949 "get_zone_info": false, 00:36:39.949 "zone_management": false, 00:36:39.949 "zone_append": false, 00:36:39.949 "compare": false, 00:36:39.949 "compare_and_write": false, 00:36:39.949 "abort": true, 00:36:39.949 "seek_hole": false, 00:36:39.949 "seek_data": false, 00:36:39.949 "copy": true, 00:36:39.949 "nvme_iov_md": false 00:36:39.949 }, 00:36:39.949 "memory_domains": [ 00:36:39.949 { 00:36:39.949 "dma_device_id": "system", 00:36:39.949 "dma_device_type": 1 00:36:39.949 }, 00:36:39.949 { 00:36:39.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.949 "dma_device_type": 2 00:36:39.949 } 00:36:39.949 ], 00:36:39.949 "driver_specific": {} 00:36:39.949 }' 00:36:39.949 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:39.949 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:39.949 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:39.949 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:40.208 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:40.467 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:40.467 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:40.467 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:36:40.467 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:40.725 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:40.726 "name": "BaseBdev3", 00:36:40.726 "aliases": [ 00:36:40.726 "d4d043e4-5f3f-4678-af63-64dea0afeebf" 00:36:40.726 ], 00:36:40.726 "product_name": "Malloc disk", 00:36:40.726 "block_size": 512, 00:36:40.726 "num_blocks": 65536, 00:36:40.726 "uuid": 
"d4d043e4-5f3f-4678-af63-64dea0afeebf", 00:36:40.726 "assigned_rate_limits": { 00:36:40.726 "rw_ios_per_sec": 0, 00:36:40.726 "rw_mbytes_per_sec": 0, 00:36:40.726 "r_mbytes_per_sec": 0, 00:36:40.726 "w_mbytes_per_sec": 0 00:36:40.726 }, 00:36:40.726 "claimed": true, 00:36:40.726 "claim_type": "exclusive_write", 00:36:40.726 "zoned": false, 00:36:40.726 "supported_io_types": { 00:36:40.726 "read": true, 00:36:40.726 "write": true, 00:36:40.726 "unmap": true, 00:36:40.726 "flush": true, 00:36:40.726 "reset": true, 00:36:40.726 "nvme_admin": false, 00:36:40.726 "nvme_io": false, 00:36:40.726 "nvme_io_md": false, 00:36:40.726 "write_zeroes": true, 00:36:40.726 "zcopy": true, 00:36:40.726 "get_zone_info": false, 00:36:40.726 "zone_management": false, 00:36:40.726 "zone_append": false, 00:36:40.726 "compare": false, 00:36:40.726 "compare_and_write": false, 00:36:40.726 "abort": true, 00:36:40.726 "seek_hole": false, 00:36:40.726 "seek_data": false, 00:36:40.726 "copy": true, 00:36:40.726 "nvme_iov_md": false 00:36:40.726 }, 00:36:40.726 "memory_domains": [ 00:36:40.726 { 00:36:40.726 "dma_device_id": "system", 00:36:40.726 "dma_device_type": 1 00:36:40.726 }, 00:36:40.726 { 00:36:40.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:40.726 "dma_device_type": 2 00:36:40.726 } 00:36:40.726 ], 00:36:40.726 "driver_specific": {} 00:36:40.726 }' 00:36:40.726 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:40.726 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:40.726 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:40.726 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:40.726 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:40.985 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:40.985 09:03:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:40.985 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:40.985 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:40.985 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:40.985 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:41.243 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:41.243 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:41.243 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:36:41.243 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:41.505 "name": "BaseBdev4", 00:36:41.505 "aliases": [ 00:36:41.505 "d0bce009-5a89-4187-b050-2bbc56233441" 00:36:41.505 ], 00:36:41.505 "product_name": "Malloc disk", 00:36:41.505 "block_size": 512, 00:36:41.505 "num_blocks": 65536, 00:36:41.505 "uuid": "d0bce009-5a89-4187-b050-2bbc56233441", 00:36:41.505 "assigned_rate_limits": { 00:36:41.505 "rw_ios_per_sec": 0, 00:36:41.505 "rw_mbytes_per_sec": 0, 00:36:41.505 
"r_mbytes_per_sec": 0, 00:36:41.505 "w_mbytes_per_sec": 0 00:36:41.505 }, 00:36:41.505 "claimed": true, 00:36:41.505 "claim_type": "exclusive_write", 00:36:41.505 "zoned": false, 00:36:41.505 "supported_io_types": { 00:36:41.505 "read": true, 00:36:41.505 "write": true, 00:36:41.505 "unmap": true, 00:36:41.505 "flush": true, 00:36:41.505 "reset": true, 00:36:41.505 "nvme_admin": false, 00:36:41.505 "nvme_io": false, 00:36:41.505 "nvme_io_md": false, 00:36:41.505 "write_zeroes": true, 00:36:41.505 "zcopy": true, 00:36:41.505 "get_zone_info": false, 00:36:41.505 "zone_management": false, 00:36:41.505 "zone_append": false, 00:36:41.505 "compare": false, 00:36:41.505 "compare_and_write": false, 00:36:41.505 "abort": true, 00:36:41.505 "seek_hole": false, 00:36:41.505 "seek_data": false, 00:36:41.505 "copy": true, 00:36:41.505 "nvme_iov_md": false 00:36:41.505 }, 00:36:41.505 "memory_domains": [ 00:36:41.505 { 00:36:41.505 "dma_device_id": "system", 00:36:41.505 "dma_device_type": 1 00:36:41.505 }, 00:36:41.505 { 00:36:41.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:41.505 "dma_device_type": 2 00:36:41.505 } 00:36:41.505 ], 00:36:41.505 "driver_specific": {} 00:36:41.505 }' 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:41.505 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:41.764 09:03:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:42.023 [2024-07-12 09:03:17.178845] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:42.023 [2024-07-12 09:03:17.178879] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:42.023 [2024-07-12 09:03:17.178965] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:42.023 [2024-07-12 09:03:17.179271] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:42.023 [2024-07-12 09:03:17.179293] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 156992 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 156992 ']' 
00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 156992 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156992 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156992' 00:36:42.023 killing process with pid 156992 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 156992 00:36:42.023 09:03:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 156992 00:36:42.023 [2024-07-12 09:03:17.212912] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:42.282 [2024-07-12 09:03:17.469275] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:43.217 ************************************ 00:36:43.217 END TEST raid5f_state_function_test 00:36:43.217 ************************************ 00:36:43.217 09:03:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:36:43.217 00:36:43.217 real 0m34.640s 00:36:43.217 user 1m5.427s 00:36:43.217 sys 0m3.654s 00:36:43.217 09:03:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:43.217 09:03:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.476 09:03:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:43.476 09:03:18 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:36:43.476 09:03:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:43.476 09:03:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:43.476 09:03:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:43.476 ************************************ 00:36:43.476 START TEST raid5f_state_function_test_sb 00:36:43.476 ************************************ 00:36:43.476 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:36:43.476 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:36:43.476 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:36:43.476 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=158132 00:36:43.477 Process raid pid: 158132 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 158132' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 158132 /var/tmp/spdk-raid.sock 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 158132 ']' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:43.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:43.477 09:03:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:43.477 [2024-07-12 09:03:18.512929] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:36:43.477 [2024-07-12 09:03:18.513776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.736 [2024-07-12 09:03:18.688150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.994 [2024-07-12 09:03:18.940185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.994 [2024-07-12 09:03:19.130017] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:44.252 09:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:44.252 09:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:36:44.252 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:44.510 [2024-07-12 09:03:19.590663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:44.510 [2024-07-12 09:03:19.590877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:44.510 [2024-07-12 09:03:19.590984] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:44.510 [2024-07-12 09:03:19.591043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:44.510 [2024-07-12 09:03:19.591129] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:44.510 [2024-07-12 09:03:19.591266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:44.510 [2024-07-12 09:03:19.591355] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:44.510 [2024-07-12 09:03:19.591412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:44.510 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:44.768 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:44.768 "name": "Existed_Raid", 00:36:44.768 "uuid": "4be305b9-43b2-4e07-9e3a-d5ccf99eb250", 00:36:44.768 "strip_size_kb": 64, 00:36:44.768 "state": "configuring", 00:36:44.768 "raid_level": "raid5f", 00:36:44.768 "superblock": true, 00:36:44.768 "num_base_bdevs": 4, 00:36:44.768 "num_base_bdevs_discovered": 0, 00:36:44.768 "num_base_bdevs_operational": 4, 00:36:44.768 "base_bdevs_list": [ 00:36:44.768 { 00:36:44.768 "name": "BaseBdev1", 00:36:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.768 "is_configured": false, 00:36:44.768 "data_offset": 0, 00:36:44.768 "data_size": 0 00:36:44.768 }, 00:36:44.768 { 00:36:44.768 "name": "BaseBdev2", 00:36:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.768 "is_configured": false, 00:36:44.768 "data_offset": 0, 00:36:44.768 "data_size": 0 00:36:44.768 }, 00:36:44.768 { 00:36:44.768 "name": "BaseBdev3", 00:36:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.768 "is_configured": false, 00:36:44.768 "data_offset": 0, 00:36:44.768 "data_size": 0 00:36:44.768 }, 00:36:44.768 { 00:36:44.768 "name": "BaseBdev4", 00:36:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.768 "is_configured": false, 00:36:44.768 "data_offset": 0, 00:36:44.768 "data_size": 0 00:36:44.768 } 00:36:44.768 ] 00:36:44.768 }' 00:36:44.768 09:03:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:44.768 09:03:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.335 09:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:45.593 [2024-07-12 09:03:20.694714] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:45.593 [2024-07-12 09:03:20.694850] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:45.593 09:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:45.851 [2024-07-12 09:03:20.946777] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:45.851 [2024-07-12 09:03:20.946931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:45.851 [2024-07-12 09:03:20.947021] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:36:45.851 [2024-07-12 09:03:20.947097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:45.851 [2024-07-12 09:03:20.947286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:45.851 [2024-07-12 09:03:20.947370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:45.851 [2024-07-12 09:03:20.947475] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:45.852 [2024-07-12 09:03:20.947529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:45.852 09:03:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:46.109 [2024-07-12 09:03:21.227931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:46.109 BaseBdev1 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:46.109 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:46.367 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:46.626 [ 00:36:46.626 { 00:36:46.626 "name": "BaseBdev1", 00:36:46.626 "aliases": [ 00:36:46.626 "972c9e8f-60c3-444f-ba6a-9b093e47ee1c" 00:36:46.626 ], 00:36:46.626 "product_name": "Malloc disk", 00:36:46.626 "block_size": 512, 00:36:46.626 "num_blocks": 65536, 00:36:46.626 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:46.626 "assigned_rate_limits": { 00:36:46.626 "rw_ios_per_sec": 0, 00:36:46.626 "rw_mbytes_per_sec": 0, 00:36:46.626 "r_mbytes_per_sec": 0, 00:36:46.626 "w_mbytes_per_sec": 0 00:36:46.626 }, 00:36:46.626 "claimed": true, 00:36:46.626 "claim_type": "exclusive_write", 00:36:46.626 "zoned": false, 00:36:46.626 "supported_io_types": { 00:36:46.626 "read": true, 00:36:46.626 "write": true, 00:36:46.626 "unmap": true, 00:36:46.626 "flush": true, 00:36:46.626 "reset": true, 00:36:46.626 "nvme_admin": false, 00:36:46.626 "nvme_io": false, 00:36:46.626 "nvme_io_md": false, 00:36:46.626 "write_zeroes": true, 00:36:46.626 "zcopy": true, 00:36:46.626 "get_zone_info": false, 00:36:46.626 "zone_management": false, 00:36:46.626 "zone_append": false, 00:36:46.626 "compare": false, 00:36:46.626 "compare_and_write": false, 00:36:46.626 "abort": true, 00:36:46.626 "seek_hole": false, 00:36:46.626 "seek_data": false, 00:36:46.626 "copy": true, 00:36:46.626 "nvme_iov_md": false 00:36:46.626 }, 00:36:46.626 "memory_domains": [ 00:36:46.626 { 00:36:46.626 "dma_device_id": "system", 00:36:46.626 
"dma_device_type": 1 00:36:46.626 }, 00:36:46.626 { 00:36:46.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:46.626 "dma_device_type": 2 00:36:46.626 } 00:36:46.626 ], 00:36:46.626 "driver_specific": {} 00:36:46.626 } 00:36:46.626 ] 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.626 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:46.883 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.883 "name": "Existed_Raid", 00:36:46.883 "uuid": "219e4746-3860-495c-a57d-6186c8523d63", 00:36:46.883 "strip_size_kb": 64, 00:36:46.883 "state": "configuring", 00:36:46.883 "raid_level": "raid5f", 00:36:46.883 "superblock": true, 00:36:46.883 "num_base_bdevs": 4, 00:36:46.883 "num_base_bdevs_discovered": 1, 00:36:46.883 "num_base_bdevs_operational": 4, 00:36:46.883 "base_bdevs_list": [ 00:36:46.883 { 00:36:46.883 "name": "BaseBdev1", 00:36:46.883 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:46.883 "is_configured": true, 00:36:46.883 "data_offset": 2048, 00:36:46.883 "data_size": 63488 00:36:46.883 }, 00:36:46.883 { 00:36:46.883 "name": "BaseBdev2", 00:36:46.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.883 "is_configured": false, 00:36:46.883 "data_offset": 0, 00:36:46.883 "data_size": 0 00:36:46.883 }, 00:36:46.883 { 00:36:46.883 "name": "BaseBdev3", 00:36:46.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.883 "is_configured": false, 00:36:46.883 "data_offset": 0, 00:36:46.883 "data_size": 0 00:36:46.883 }, 00:36:46.883 { 00:36:46.883 "name": "BaseBdev4", 00:36:46.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.883 "is_configured": false, 00:36:46.883 "data_offset": 0, 00:36:46.883 "data_size": 0 00:36:46.883 } 00:36:46.883 ] 00:36:46.883 }' 00:36:46.883 09:03:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.883 09:03:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.448 09:03:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:47.706 [2024-07-12 09:03:22.680312] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:47.706 [2024-07-12 09:03:22.680532] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:36:47.706 [2024-07-12 09:03:22.880364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:47.706 [2024-07-12 09:03:22.882160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:47.706 [2024-07-12 09:03:22.882380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:47.706 [2024-07-12 09:03:22.882482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:47.706 [2024-07-12 09:03:22.882542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:47.706 [2024-07-12 09:03:22.882631] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:47.706 [2024-07-12 09:03:22.882691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.706 09:03:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.970 09:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.970 "name": "Existed_Raid", 00:36:47.970 "uuid": 
"2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:47.970 "strip_size_kb": 64, 00:36:47.970 "state": "configuring", 00:36:47.970 "raid_level": "raid5f", 00:36:47.970 "superblock": true, 00:36:47.970 "num_base_bdevs": 4, 00:36:47.970 "num_base_bdevs_discovered": 1, 00:36:47.970 "num_base_bdevs_operational": 4, 00:36:47.970 "base_bdevs_list": [ 00:36:47.970 { 00:36:47.970 "name": "BaseBdev1", 00:36:47.970 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:47.970 "is_configured": true, 00:36:47.970 "data_offset": 2048, 00:36:47.970 "data_size": 63488 00:36:47.970 }, 00:36:47.970 { 00:36:47.970 "name": "BaseBdev2", 00:36:47.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.970 "is_configured": false, 00:36:47.970 "data_offset": 0, 00:36:47.970 "data_size": 0 00:36:47.970 }, 00:36:47.970 { 00:36:47.970 "name": "BaseBdev3", 00:36:47.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.970 "is_configured": false, 00:36:47.970 "data_offset": 0, 00:36:47.970 "data_size": 0 00:36:47.970 }, 00:36:47.970 { 00:36:47.970 "name": "BaseBdev4", 00:36:47.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.970 "is_configured": false, 00:36:47.970 "data_offset": 0, 00:36:47.970 "data_size": 0 00:36:47.970 } 00:36:47.970 ] 00:36:47.970 }' 00:36:47.970 09:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.970 09:03:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:48.535 09:03:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:49.100 [2024-07-12 09:03:24.014165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:49.100 BaseBdev2 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:49.100 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:49.665 [ 00:36:49.665 { 00:36:49.665 "name": "BaseBdev2", 00:36:49.665 "aliases": [ 00:36:49.665 "230bd94a-a5f6-46dd-b4db-5263b33e7895" 00:36:49.665 ], 00:36:49.665 "product_name": "Malloc disk", 00:36:49.665 "block_size": 512, 00:36:49.665 "num_blocks": 65536, 00:36:49.665 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:49.666 "assigned_rate_limits": { 00:36:49.666 "rw_ios_per_sec": 0, 00:36:49.666 "rw_mbytes_per_sec": 0, 00:36:49.666 "r_mbytes_per_sec": 0, 00:36:49.666 "w_mbytes_per_sec": 0 00:36:49.666 }, 00:36:49.666 "claimed": true, 00:36:49.666 "claim_type": "exclusive_write", 00:36:49.666 "zoned": 
false, 00:36:49.666 "supported_io_types": { 00:36:49.666 "read": true, 00:36:49.666 "write": true, 00:36:49.666 "unmap": true, 00:36:49.666 "flush": true, 00:36:49.666 "reset": true, 00:36:49.666 "nvme_admin": false, 00:36:49.666 "nvme_io": false, 00:36:49.666 "nvme_io_md": false, 00:36:49.666 "write_zeroes": true, 00:36:49.666 "zcopy": true, 00:36:49.666 "get_zone_info": false, 00:36:49.666 "zone_management": false, 00:36:49.666 "zone_append": false, 00:36:49.666 "compare": false, 00:36:49.666 "compare_and_write": false, 00:36:49.666 "abort": true, 00:36:49.666 "seek_hole": false, 00:36:49.666 "seek_data": false, 00:36:49.666 "copy": true, 00:36:49.666 "nvme_iov_md": false 00:36:49.666 }, 00:36:49.666 "memory_domains": [ 00:36:49.666 { 00:36:49.666 "dma_device_id": "system", 00:36:49.666 "dma_device_type": 1 00:36:49.666 }, 00:36:49.666 { 00:36:49.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:49.666 "dma_device_type": 2 00:36:49.666 } 00:36:49.666 ], 00:36:49.666 "driver_specific": {} 00:36:49.666 } 00:36:49.666 ] 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:49.666 "name": "Existed_Raid", 00:36:49.666 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:49.666 "strip_size_kb": 64, 00:36:49.666 "state": "configuring", 00:36:49.666 "raid_level": "raid5f", 00:36:49.666 "superblock": true, 00:36:49.666 "num_base_bdevs": 4, 00:36:49.666 "num_base_bdevs_discovered": 2, 00:36:49.666 "num_base_bdevs_operational": 4, 00:36:49.666 "base_bdevs_list": [ 00:36:49.666 { 00:36:49.666 "name": "BaseBdev1", 00:36:49.666 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:49.666 "is_configured": true, 
00:36:49.666 "data_offset": 2048, 00:36:49.666 "data_size": 63488 00:36:49.666 }, 00:36:49.666 { 00:36:49.666 "name": "BaseBdev2", 00:36:49.666 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:49.666 "is_configured": true, 00:36:49.666 "data_offset": 2048, 00:36:49.666 "data_size": 63488 00:36:49.666 }, 00:36:49.666 { 00:36:49.666 "name": "BaseBdev3", 00:36:49.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.666 "is_configured": false, 00:36:49.666 "data_offset": 0, 00:36:49.666 "data_size": 0 00:36:49.666 }, 00:36:49.666 { 00:36:49.666 "name": "BaseBdev4", 00:36:49.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.666 "is_configured": false, 00:36:49.666 "data_offset": 0, 00:36:49.666 "data_size": 0 00:36:49.666 } 00:36:49.666 ] 00:36:49.666 }' 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:49.666 09:03:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.599 09:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:50.599 [2024-07-12 09:03:25.638607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:50.599 BaseBdev3 00:36:50.599 09:03:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:36:50.599 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:36:50.599 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:50.599 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:50.600 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:50.600 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:50.600 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:50.860 09:03:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:50.860 [ 00:36:50.860 { 00:36:50.860 "name": "BaseBdev3", 00:36:50.860 "aliases": [ 00:36:50.860 "6b4dcf34-bee7-445a-a638-edd11eccf8b2" 00:36:50.860 ], 00:36:50.860 "product_name": "Malloc disk", 00:36:50.860 "block_size": 512, 00:36:50.860 "num_blocks": 65536, 00:36:50.860 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:50.860 "assigned_rate_limits": { 00:36:50.860 "rw_ios_per_sec": 0, 00:36:50.860 "rw_mbytes_per_sec": 0, 00:36:50.860 "r_mbytes_per_sec": 0, 00:36:50.860 "w_mbytes_per_sec": 0 00:36:50.860 }, 00:36:50.860 "claimed": true, 00:36:50.860 "claim_type": "exclusive_write", 00:36:50.860 "zoned": false, 00:36:50.860 "supported_io_types": { 00:36:50.860 "read": true, 00:36:50.860 "write": true, 00:36:50.860 "unmap": true, 00:36:50.860 "flush": true, 00:36:50.860 "reset": true, 00:36:50.860 "nvme_admin": false, 00:36:50.860 "nvme_io": false, 00:36:50.860 "nvme_io_md": false, 00:36:50.860 "write_zeroes": true, 00:36:50.860 "zcopy": true, 00:36:50.860 "get_zone_info": false, 00:36:50.860 "zone_management": false, 00:36:50.860 "zone_append": false, 00:36:50.860 "compare": 
false, 00:36:50.860 "compare_and_write": false, 00:36:50.860 "abort": true, 00:36:50.860 "seek_hole": false, 00:36:50.860 "seek_data": false, 00:36:50.860 "copy": true, 00:36:50.860 "nvme_iov_md": false 00:36:50.860 }, 00:36:50.860 "memory_domains": [ 00:36:50.860 { 00:36:50.860 "dma_device_id": "system", 00:36:50.860 "dma_device_type": 1 00:36:50.860 }, 00:36:50.860 { 00:36:50.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.860 "dma_device_type": 2 00:36:50.860 } 00:36:50.860 ], 00:36:50.860 "driver_specific": {} 00:36:50.860 } 00:36:50.860 ] 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:50.860 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:51.130 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:51.130 "name": "Existed_Raid", 00:36:51.130 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:51.130 "strip_size_kb": 64, 00:36:51.130 "state": "configuring", 00:36:51.130 "raid_level": "raid5f", 00:36:51.130 "superblock": true, 00:36:51.130 "num_base_bdevs": 4, 00:36:51.130 "num_base_bdevs_discovered": 3, 00:36:51.130 "num_base_bdevs_operational": 4, 00:36:51.130 "base_bdevs_list": [ 00:36:51.130 { 00:36:51.130 "name": "BaseBdev1", 00:36:51.130 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:51.130 "is_configured": true, 00:36:51.130 "data_offset": 2048, 00:36:51.130 "data_size": 63488 00:36:51.130 }, 00:36:51.130 { 00:36:51.130 "name": "BaseBdev2", 00:36:51.130 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:51.130 "is_configured": true, 00:36:51.130 "data_offset": 2048, 00:36:51.130 "data_size": 63488 00:36:51.130 }, 00:36:51.130 { 00:36:51.130 "name": "BaseBdev3", 00:36:51.130 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:51.130 "is_configured": true, 00:36:51.130 
"data_offset": 2048, 00:36:51.130 "data_size": 63488 00:36:51.130 }, 00:36:51.130 { 00:36:51.130 "name": "BaseBdev4", 00:36:51.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.130 "is_configured": false, 00:36:51.130 "data_offset": 0, 00:36:51.130 "data_size": 0 00:36:51.130 } 00:36:51.130 ] 00:36:51.130 }' 00:36:51.130 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:51.130 09:03:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.713 09:03:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:36:51.970 [2024-07-12 09:03:27.127186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:51.970 [2024-07-12 09:03:27.127449] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:36:51.970 [2024-07-12 09:03:27.127466] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:51.970 BaseBdev4 00:36:51.970 [2024-07-12 09:03:27.127582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:51.970 [2024-07-12 09:03:27.133328] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:36:51.970 [2024-07-12 09:03:27.133353] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:36:51.970 [2024-07-12 09:03:27.133501] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:51.970 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:52.228 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:52.487 [ 00:36:52.487 { 00:36:52.487 "name": "BaseBdev4", 00:36:52.487 "aliases": [ 00:36:52.487 "45397662-48d2-4555-a76b-00cc05e9583f" 00:36:52.487 ], 00:36:52.487 "product_name": "Malloc disk", 00:36:52.487 "block_size": 512, 00:36:52.487 "num_blocks": 65536, 00:36:52.487 "uuid": "45397662-48d2-4555-a76b-00cc05e9583f", 00:36:52.487 "assigned_rate_limits": { 00:36:52.487 "rw_ios_per_sec": 0, 00:36:52.487 "rw_mbytes_per_sec": 0, 00:36:52.487 "r_mbytes_per_sec": 0, 00:36:52.487 "w_mbytes_per_sec": 0 00:36:52.487 }, 00:36:52.487 "claimed": true, 00:36:52.487 "claim_type": "exclusive_write", 00:36:52.487 "zoned": false, 00:36:52.487 "supported_io_types": { 00:36:52.487 "read": true, 00:36:52.487 "write": true, 00:36:52.487 "unmap": true, 00:36:52.487 "flush": true, 00:36:52.487 
"reset": true, 00:36:52.487 "nvme_admin": false, 00:36:52.487 "nvme_io": false, 00:36:52.487 "nvme_io_md": false, 00:36:52.487 "write_zeroes": true, 00:36:52.487 "zcopy": true, 00:36:52.487 "get_zone_info": false, 00:36:52.487 "zone_management": false, 00:36:52.487 "zone_append": false, 00:36:52.487 "compare": false, 00:36:52.487 "compare_and_write": false, 00:36:52.487 "abort": true, 00:36:52.487 "seek_hole": false, 00:36:52.487 "seek_data": false, 00:36:52.487 "copy": true, 00:36:52.487 "nvme_iov_md": false 00:36:52.487 }, 00:36:52.487 "memory_domains": [ 00:36:52.487 { 00:36:52.487 "dma_device_id": "system", 00:36:52.487 "dma_device_type": 1 00:36:52.487 }, 00:36:52.487 { 00:36:52.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.487 "dma_device_type": 2 00:36:52.487 } 00:36:52.487 ], 00:36:52.487 "driver_specific": {} 00:36:52.487 } 00:36:52.487 ] 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.487 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:52.745 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:52.745 "name": "Existed_Raid", 00:36:52.745 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:52.745 "strip_size_kb": 64, 00:36:52.745 "state": "online", 00:36:52.745 "raid_level": "raid5f", 00:36:52.745 "superblock": true, 00:36:52.746 "num_base_bdevs": 4, 00:36:52.746 "num_base_bdevs_discovered": 4, 00:36:52.746 "num_base_bdevs_operational": 4, 00:36:52.746 "base_bdevs_list": [ 00:36:52.746 { 00:36:52.746 "name": "BaseBdev1", 00:36:52.746 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:52.746 "is_configured": true, 00:36:52.746 "data_offset": 2048, 00:36:52.746 "data_size": 63488 00:36:52.746 }, 00:36:52.746 { 00:36:52.746 "name": "BaseBdev2", 00:36:52.746 "uuid": 
"230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:52.746 "is_configured": true, 00:36:52.746 "data_offset": 2048, 00:36:52.746 "data_size": 63488 00:36:52.746 }, 00:36:52.746 { 00:36:52.746 "name": "BaseBdev3", 00:36:52.746 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:52.746 "is_configured": true, 00:36:52.746 "data_offset": 2048, 00:36:52.746 "data_size": 63488 00:36:52.746 }, 00:36:52.746 { 00:36:52.746 "name": "BaseBdev4", 00:36:52.746 "uuid": "45397662-48d2-4555-a76b-00cc05e9583f", 00:36:52.746 "is_configured": true, 00:36:52.746 "data_offset": 2048, 00:36:52.746 "data_size": 63488 00:36:52.746 } 00:36:52.746 ] 00:36:52.746 }' 00:36:52.746 09:03:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:52.746 09:03:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:53.313 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:53.571 [2024-07-12 09:03:28.583848] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:53.571 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:53.571 "name": "Existed_Raid", 00:36:53.571 "aliases": [ 00:36:53.571 "2906a781-1cf3-4be2-b335-87398fda22a1" 00:36:53.571 ], 00:36:53.571 "product_name": "Raid Volume", 00:36:53.571 "block_size": 512, 00:36:53.571 "num_blocks": 190464, 00:36:53.571 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:53.571 "assigned_rate_limits": { 00:36:53.571 "rw_ios_per_sec": 0, 00:36:53.571 "rw_mbytes_per_sec": 0, 00:36:53.571 "r_mbytes_per_sec": 0, 00:36:53.571 "w_mbytes_per_sec": 0 00:36:53.571 }, 00:36:53.571 "claimed": false, 00:36:53.571 "zoned": false, 00:36:53.571 "supported_io_types": { 00:36:53.571 "read": true, 00:36:53.571 "write": true, 00:36:53.571 "unmap": false, 00:36:53.571 "flush": false, 00:36:53.571 "reset": true, 00:36:53.571 "nvme_admin": false, 00:36:53.571 "nvme_io": false, 00:36:53.571 "nvme_io_md": false, 00:36:53.571 "write_zeroes": true, 00:36:53.571 "zcopy": false, 00:36:53.571 "get_zone_info": false, 00:36:53.572 "zone_management": false, 00:36:53.572 "zone_append": false, 00:36:53.572 "compare": false, 00:36:53.572 "compare_and_write": false, 00:36:53.572 "abort": false, 00:36:53.572 "seek_hole": false, 00:36:53.572 "seek_data": false, 00:36:53.572 "copy": false, 00:36:53.572 "nvme_iov_md": false 00:36:53.572 }, 00:36:53.572 "driver_specific": { 00:36:53.572 "raid": { 00:36:53.572 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:53.572 "strip_size_kb": 64, 00:36:53.572 "state": "online", 00:36:53.572 "raid_level": 
"raid5f", 00:36:53.572 "superblock": true, 00:36:53.572 "num_base_bdevs": 4, 00:36:53.572 "num_base_bdevs_discovered": 4, 00:36:53.572 "num_base_bdevs_operational": 4, 00:36:53.572 "base_bdevs_list": [ 00:36:53.572 { 00:36:53.572 "name": "BaseBdev1", 00:36:53.572 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:53.572 "is_configured": true, 00:36:53.572 "data_offset": 2048, 00:36:53.572 "data_size": 63488 00:36:53.572 }, 00:36:53.572 { 00:36:53.572 "name": "BaseBdev2", 00:36:53.572 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:53.572 "is_configured": true, 00:36:53.572 "data_offset": 2048, 00:36:53.572 "data_size": 63488 00:36:53.572 }, 00:36:53.572 { 00:36:53.572 "name": "BaseBdev3", 00:36:53.572 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:53.572 "is_configured": true, 00:36:53.572 "data_offset": 2048, 00:36:53.572 "data_size": 63488 00:36:53.572 }, 00:36:53.572 { 00:36:53.572 "name": "BaseBdev4", 00:36:53.572 "uuid": "45397662-48d2-4555-a76b-00cc05e9583f", 00:36:53.572 "is_configured": true, 00:36:53.572 "data_offset": 2048, 00:36:53.572 "data_size": 63488 00:36:53.572 } 00:36:53.572 ] 00:36:53.572 } 00:36:53.572 } 00:36:53.572 }' 00:36:53.572 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:53.572 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:53.572 BaseBdev2 00:36:53.572 BaseBdev3 00:36:53.572 BaseBdev4' 00:36:53.572 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:53.572 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:53.572 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:53.831 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:53.831 "name": "BaseBdev1", 00:36:53.831 "aliases": [ 00:36:53.831 "972c9e8f-60c3-444f-ba6a-9b093e47ee1c" 00:36:53.831 ], 00:36:53.831 "product_name": "Malloc disk", 00:36:53.831 "block_size": 512, 00:36:53.831 "num_blocks": 65536, 00:36:53.831 "uuid": "972c9e8f-60c3-444f-ba6a-9b093e47ee1c", 00:36:53.831 "assigned_rate_limits": { 00:36:53.831 "rw_ios_per_sec": 0, 00:36:53.831 "rw_mbytes_per_sec": 0, 00:36:53.831 "r_mbytes_per_sec": 0, 00:36:53.831 "w_mbytes_per_sec": 0 00:36:53.831 }, 00:36:53.831 "claimed": true, 00:36:53.831 "claim_type": "exclusive_write", 00:36:53.831 "zoned": false, 00:36:53.831 "supported_io_types": { 00:36:53.831 "read": true, 00:36:53.831 "write": true, 00:36:53.831 "unmap": true, 00:36:53.831 "flush": true, 00:36:53.831 "reset": true, 00:36:53.831 "nvme_admin": false, 00:36:53.831 "nvme_io": false, 00:36:53.831 "nvme_io_md": false, 00:36:53.831 "write_zeroes": true, 00:36:53.831 "zcopy": true, 00:36:53.831 "get_zone_info": false, 00:36:53.831 "zone_management": false, 00:36:53.831 "zone_append": false, 00:36:53.831 "compare": false, 00:36:53.831 "compare_and_write": false, 00:36:53.831 "abort": true, 00:36:53.831 "seek_hole": false, 00:36:53.831 "seek_data": false, 00:36:53.831 "copy": true, 00:36:53.831 "nvme_iov_md": false 00:36:53.831 }, 00:36:53.831 "memory_domains": [ 00:36:53.831 { 00:36:53.831 "dma_device_id": "system", 00:36:53.831 "dma_device_type": 1 00:36:53.831 }, 00:36:53.831 { 00:36:53.831 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:36:53.831 "dma_device_type": 2 00:36:53.831 } 00:36:53.831 ], 00:36:53.831 "driver_specific": {} 00:36:53.831 }' 00:36:53.831 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:53.831 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:53.831 09:03:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:53.831 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.090 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:54.349 "name": "BaseBdev2", 00:36:54.349 "aliases": [ 00:36:54.349 "230bd94a-a5f6-46dd-b4db-5263b33e7895" 00:36:54.349 ], 00:36:54.349 "product_name": "Malloc disk", 00:36:54.349 "block_size": 512, 00:36:54.349 "num_blocks": 65536, 00:36:54.349 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:54.349 "assigned_rate_limits": { 00:36:54.349 "rw_ios_per_sec": 0, 00:36:54.349 "rw_mbytes_per_sec": 0, 00:36:54.349 "r_mbytes_per_sec": 0, 00:36:54.349 "w_mbytes_per_sec": 0 00:36:54.349 }, 00:36:54.349 "claimed": true, 00:36:54.349 "claim_type": "exclusive_write", 00:36:54.349 "zoned": false, 00:36:54.349 "supported_io_types": { 00:36:54.349 "read": true, 00:36:54.349 "write": true, 00:36:54.349 "unmap": true, 00:36:54.349 "flush": true, 00:36:54.349 "reset": true, 00:36:54.349 "nvme_admin": false, 00:36:54.349 "nvme_io": false, 00:36:54.349 "nvme_io_md": false, 00:36:54.349 "write_zeroes": true, 00:36:54.349 "zcopy": true, 00:36:54.349 "get_zone_info": false, 00:36:54.349 "zone_management": false, 00:36:54.349 "zone_append": false, 00:36:54.349 "compare": false, 00:36:54.349 "compare_and_write": false, 00:36:54.349 "abort": true, 00:36:54.349 "seek_hole": false, 00:36:54.349 "seek_data": false, 00:36:54.349 "copy": true, 00:36:54.349 "nvme_iov_md": false 00:36:54.349 }, 00:36:54.349 "memory_domains": [ 00:36:54.349 { 00:36:54.349 "dma_device_id": "system", 00:36:54.349 "dma_device_type": 1 00:36:54.349 }, 00:36:54.349 { 00:36:54.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:54.349 "dma_device_type": 2 00:36:54.349 } 00:36:54.349 ], 00:36:54.349 
"driver_specific": {} 00:36:54.349 }' 00:36:54.349 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:54.608 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:54.866 09:03:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:36:55.124 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:55.124 "name": "BaseBdev3", 00:36:55.125 "aliases": [ 00:36:55.125 "6b4dcf34-bee7-445a-a638-edd11eccf8b2" 00:36:55.125 ], 00:36:55.125 "product_name": "Malloc disk", 00:36:55.125 "block_size": 512, 00:36:55.125 "num_blocks": 65536, 00:36:55.125 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:55.125 "assigned_rate_limits": { 00:36:55.125 "rw_ios_per_sec": 0, 00:36:55.125 "rw_mbytes_per_sec": 0, 00:36:55.125 "r_mbytes_per_sec": 0, 00:36:55.125 "w_mbytes_per_sec": 0 00:36:55.125 }, 00:36:55.125 "claimed": true, 00:36:55.125 "claim_type": "exclusive_write", 00:36:55.125 "zoned": false, 00:36:55.125 "supported_io_types": { 00:36:55.125 "read": true, 00:36:55.125 "write": true, 00:36:55.125 "unmap": true, 00:36:55.125 "flush": true, 00:36:55.125 "reset": true, 00:36:55.125 "nvme_admin": false, 00:36:55.125 "nvme_io": false, 00:36:55.125 "nvme_io_md": false, 00:36:55.125 "write_zeroes": true, 00:36:55.125 "zcopy": true, 00:36:55.125 "get_zone_info": false, 00:36:55.125 "zone_management": false, 00:36:55.125 "zone_append": false, 00:36:55.125 "compare": false, 00:36:55.125 "compare_and_write": false, 00:36:55.125 "abort": true, 00:36:55.125 "seek_hole": false, 00:36:55.125 "seek_data": false, 00:36:55.125 "copy": true, 00:36:55.125 "nvme_iov_md": false 00:36:55.125 }, 00:36:55.125 "memory_domains": [ 00:36:55.125 { 00:36:55.125 "dma_device_id": "system", 00:36:55.125 "dma_device_type": 1 00:36:55.125 }, 00:36:55.125 { 00:36:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:55.125 "dma_device_type": 2 00:36:55.125 } 00:36:55.125 ], 00:36:55.125 "driver_specific": {} 00:36:55.125 }' 00:36:55.125 09:03:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.125 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.125 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:55.125 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:55.383 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:55.642 "name": "BaseBdev4", 00:36:55.642 "aliases": [ 00:36:55.642 "45397662-48d2-4555-a76b-00cc05e9583f" 00:36:55.642 ], 00:36:55.642 "product_name": "Malloc disk", 00:36:55.642 "block_size": 512, 00:36:55.642 "num_blocks": 65536, 00:36:55.642 "uuid": "45397662-48d2-4555-a76b-00cc05e9583f", 00:36:55.642 "assigned_rate_limits": { 00:36:55.642 "rw_ios_per_sec": 0, 00:36:55.642 "rw_mbytes_per_sec": 0, 00:36:55.642 "r_mbytes_per_sec": 0, 00:36:55.642 "w_mbytes_per_sec": 0 00:36:55.642 }, 00:36:55.642 "claimed": true, 00:36:55.642 "claim_type": "exclusive_write", 00:36:55.642 "zoned": false, 00:36:55.642 "supported_io_types": { 00:36:55.642 "read": true, 00:36:55.642 "write": true, 00:36:55.642 "unmap": true, 00:36:55.642 "flush": true, 00:36:55.642 "reset": true, 00:36:55.642 "nvme_admin": false, 00:36:55.642 "nvme_io": false, 00:36:55.642 "nvme_io_md": false, 00:36:55.642 "write_zeroes": true, 00:36:55.642 "zcopy": true, 00:36:55.642 "get_zone_info": false, 00:36:55.642 "zone_management": false, 00:36:55.642 "zone_append": false, 00:36:55.642 "compare": false, 00:36:55.642 "compare_and_write": false, 00:36:55.642 "abort": true, 00:36:55.642 "seek_hole": false, 00:36:55.642 "seek_data": false, 00:36:55.642 "copy": true, 00:36:55.642 "nvme_iov_md": false 00:36:55.642 }, 00:36:55.642 "memory_domains": [ 00:36:55.642 { 00:36:55.642 "dma_device_id": "system", 00:36:55.642 "dma_device_type": 1 00:36:55.642 }, 00:36:55.642 { 00:36:55.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:55.642 "dma_device_type": 2 00:36:55.642 } 00:36:55.642 ], 00:36:55.642 "driver_specific": {} 00:36:55.642 }' 00:36:55.642 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.901 09:03:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:55.901 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:36:55.901 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.901 09:03:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:55.901 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:55.901 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:55.901 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:56.160 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:56.160 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:56.160 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:56.160 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:56.160 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:56.418 [2024-07-12 09:03:31.480253] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.418 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.677 09:03:31 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:56.677 "name": "Existed_Raid", 00:36:56.677 "uuid": "2906a781-1cf3-4be2-b335-87398fda22a1", 00:36:56.677 "strip_size_kb": 64, 00:36:56.677 "state": "online", 00:36:56.677 "raid_level": "raid5f", 00:36:56.677 "superblock": true, 00:36:56.677 "num_base_bdevs": 4, 00:36:56.677 "num_base_bdevs_discovered": 3, 00:36:56.677 "num_base_bdevs_operational": 3, 00:36:56.677 "base_bdevs_list": [ 00:36:56.677 { 00:36:56.677 "name": null, 00:36:56.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.677 "is_configured": false, 00:36:56.677 "data_offset": 2048, 00:36:56.677 "data_size": 63488 00:36:56.677 }, 00:36:56.677 { 00:36:56.677 "name": "BaseBdev2", 00:36:56.677 "uuid": "230bd94a-a5f6-46dd-b4db-5263b33e7895", 00:36:56.677 "is_configured": true, 00:36:56.677 "data_offset": 2048, 00:36:56.677 "data_size": 63488 00:36:56.677 }, 00:36:56.677 { 00:36:56.677 "name": "BaseBdev3", 00:36:56.677 "uuid": "6b4dcf34-bee7-445a-a638-edd11eccf8b2", 00:36:56.677 "is_configured": true, 00:36:56.677 "data_offset": 2048, 00:36:56.677 "data_size": 63488 00:36:56.677 }, 00:36:56.677 { 00:36:56.677 "name": "BaseBdev4", 00:36:56.677 "uuid": "45397662-48d2-4555-a76b-00cc05e9583f", 00:36:56.677 "is_configured": true, 00:36:56.677 "data_offset": 2048, 00:36:56.677 "data_size": 63488 00:36:56.677 } 00:36:56.677 ] 00:36:56.677 }' 00:36:56.677 09:03:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:56.677 09:03:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:57.244 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:57.244 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:57.244 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.244 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:57.503 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:57.503 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:57.503 09:03:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:57.761 [2024-07-12 09:03:32.897145] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:57.761 [2024-07-12 09:03:32.897313] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:58.020 [2024-07-12 09:03:33.003941] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:58.020 
09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:58.020 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:58.278 [2024-07-12 09:03:33.440635] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:58.538 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:36:58.796 [2024-07-12 09:03:33.876035] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:58.796 [2024-07-12 09:03:33.876098] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:36:58.797 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:58.797 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:58.797 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.797 09:03:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:59.055 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:59.314 BaseBdev2 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:59.314 09:03:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:59.314 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:59.582 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:59.840 [ 00:36:59.840 { 00:36:59.840 "name": "BaseBdev2", 00:36:59.840 "aliases": [ 00:36:59.840 "63b911f0-a63f-45a3-8f9d-7f0240145af5" 00:36:59.840 ], 00:36:59.840 "product_name": "Malloc disk", 00:36:59.840 "block_size": 512, 00:36:59.840 "num_blocks": 65536, 00:36:59.840 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:36:59.840 "assigned_rate_limits": { 00:36:59.840 "rw_ios_per_sec": 0, 00:36:59.840 "rw_mbytes_per_sec": 0, 00:36:59.840 "r_mbytes_per_sec": 0, 00:36:59.840 "w_mbytes_per_sec": 0 00:36:59.840 }, 00:36:59.840 "claimed": false, 00:36:59.840 "zoned": false, 00:36:59.840 "supported_io_types": { 00:36:59.840 "read": true, 00:36:59.840 "write": true, 00:36:59.840 "unmap": true, 00:36:59.840 "flush": true, 00:36:59.840 "reset": true, 00:36:59.840 "nvme_admin": false, 00:36:59.840 "nvme_io": false, 00:36:59.840 "nvme_io_md": false, 00:36:59.840 "write_zeroes": true, 00:36:59.840 "zcopy": true, 00:36:59.840 "get_zone_info": false, 00:36:59.840 "zone_management": false, 00:36:59.840 "zone_append": false, 00:36:59.840 "compare": false, 00:36:59.840 "compare_and_write": false, 00:36:59.840 "abort": true, 00:36:59.840 "seek_hole": false, 00:36:59.840 "seek_data": false, 00:36:59.840 "copy": true, 00:36:59.840 "nvme_iov_md": false 00:36:59.840 }, 00:36:59.840 "memory_domains": [ 00:36:59.840 { 00:36:59.841 "dma_device_id": "system", 00:36:59.841 "dma_device_type": 1 00:36:59.841 }, 00:36:59.841 { 00:36:59.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.841 "dma_device_type": 2 00:36:59.841 } 00:36:59.841 ], 00:36:59.841 "driver_specific": {} 00:36:59.841 } 00:36:59.841 ] 00:36:59.841 09:03:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:36:59.841 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:36:59.841 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:36:59.841 09:03:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:59.841 BaseBdev3 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:59.841 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:37:00.098 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:00.357 [ 00:37:00.357 { 00:37:00.357 "name": "BaseBdev3", 00:37:00.357 "aliases": [ 00:37:00.357 "68ad1206-8434-40e1-80d4-87af9c1aa350" 00:37:00.357 ], 00:37:00.357 "product_name": "Malloc disk", 00:37:00.357 "block_size": 512, 00:37:00.357 "num_blocks": 65536, 00:37:00.357 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:00.357 "assigned_rate_limits": { 00:37:00.357 "rw_ios_per_sec": 0, 00:37:00.357 "rw_mbytes_per_sec": 0, 00:37:00.357 "r_mbytes_per_sec": 0, 00:37:00.357 "w_mbytes_per_sec": 0 00:37:00.357 }, 00:37:00.357 "claimed": false, 00:37:00.357 "zoned": false, 00:37:00.357 "supported_io_types": { 00:37:00.357 "read": true, 00:37:00.357 "write": true, 00:37:00.357 "unmap": true, 00:37:00.357 "flush": true, 00:37:00.357 "reset": true, 00:37:00.357 "nvme_admin": false, 00:37:00.357 "nvme_io": false, 00:37:00.357 "nvme_io_md": false, 00:37:00.357 "write_zeroes": true, 00:37:00.357 "zcopy": true, 00:37:00.357 "get_zone_info": false, 00:37:00.357 "zone_management": false, 00:37:00.357 "zone_append": false, 00:37:00.357 "compare": false, 00:37:00.357 "compare_and_write": false, 00:37:00.357 "abort": true, 00:37:00.357 "seek_hole": false, 00:37:00.357 "seek_data": false, 00:37:00.357 "copy": true, 00:37:00.357 "nvme_iov_md": false 00:37:00.357 }, 00:37:00.357 "memory_domains": [ 00:37:00.357 { 00:37:00.357 "dma_device_id": "system", 00:37:00.357 "dma_device_type": 1 00:37:00.357 }, 00:37:00.357 { 00:37:00.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.357 "dma_device_type": 2 00:37:00.357 } 00:37:00.357 ], 00:37:00.357 "driver_specific": {} 00:37:00.357 } 00:37:00.357 ] 00:37:00.357 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:37:00.357 09:03:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:00.357 09:03:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:00.357 09:03:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:37:00.616 BaseBdev4 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:00.616 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:00.874 09:03:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:01.133 [ 00:37:01.133 { 00:37:01.133 "name": 
"BaseBdev4", 00:37:01.133 "aliases": [ 00:37:01.133 "51434cf1-058e-4dc2-938d-f171213a5c3f" 00:37:01.133 ], 00:37:01.133 "product_name": "Malloc disk", 00:37:01.133 "block_size": 512, 00:37:01.133 "num_blocks": 65536, 00:37:01.133 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:01.133 "assigned_rate_limits": { 00:37:01.133 "rw_ios_per_sec": 0, 00:37:01.133 "rw_mbytes_per_sec": 0, 00:37:01.133 "r_mbytes_per_sec": 0, 00:37:01.133 "w_mbytes_per_sec": 0 00:37:01.133 }, 00:37:01.133 "claimed": false, 00:37:01.133 "zoned": false, 00:37:01.133 "supported_io_types": { 00:37:01.133 "read": true, 00:37:01.133 "write": true, 00:37:01.133 "unmap": true, 00:37:01.133 "flush": true, 00:37:01.133 "reset": true, 00:37:01.133 "nvme_admin": false, 00:37:01.133 "nvme_io": false, 00:37:01.133 "nvme_io_md": false, 00:37:01.133 "write_zeroes": true, 00:37:01.133 "zcopy": true, 00:37:01.133 "get_zone_info": false, 00:37:01.133 "zone_management": false, 00:37:01.133 "zone_append": false, 00:37:01.133 "compare": false, 00:37:01.133 "compare_and_write": false, 00:37:01.133 "abort": true, 00:37:01.133 "seek_hole": false, 00:37:01.133 "seek_data": false, 00:37:01.133 "copy": true, 00:37:01.133 "nvme_iov_md": false 00:37:01.133 }, 00:37:01.133 "memory_domains": [ 00:37:01.133 { 00:37:01.133 "dma_device_id": "system", 00:37:01.133 "dma_device_type": 1 00:37:01.133 }, 00:37:01.133 { 00:37:01.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:01.133 "dma_device_type": 2 00:37:01.133 } 00:37:01.133 ], 00:37:01.133 "driver_specific": {} 00:37:01.133 } 00:37:01.133 ] 00:37:01.133 09:03:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:37:01.133 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:37:01.133 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:37:01.133 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:01.393 [2024-07-12 09:03:36.375833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:01.393 [2024-07-12 09:03:36.375894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:01.393 [2024-07-12 09:03:36.375916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:01.393 [2024-07-12 09:03:36.377747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:01.393 [2024-07-12 09:03:36.377806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:01.393 "name": "Existed_Raid", 00:37:01.393 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:01.393 "strip_size_kb": 64, 00:37:01.393 "state": "configuring", 00:37:01.393 "raid_level": "raid5f", 00:37:01.393 "superblock": true, 00:37:01.393 "num_base_bdevs": 4, 00:37:01.393 "num_base_bdevs_discovered": 3, 00:37:01.393 "num_base_bdevs_operational": 4, 00:37:01.393 "base_bdevs_list": [ 00:37:01.393 { 00:37:01.393 "name": "BaseBdev1", 00:37:01.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.393 "is_configured": false, 00:37:01.393 "data_offset": 0, 00:37:01.393 "data_size": 0 00:37:01.393 }, 00:37:01.393 { 00:37:01.393 "name": "BaseBdev2", 00:37:01.393 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:01.393 "is_configured": true, 00:37:01.393 "data_offset": 2048, 00:37:01.393 "data_size": 63488 00:37:01.393 }, 00:37:01.393 { 00:37:01.393 "name": "BaseBdev3", 00:37:01.393 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:01.393 "is_configured": true, 00:37:01.393 "data_offset": 2048, 00:37:01.393 "data_size": 63488 00:37:01.393 }, 00:37:01.393 { 00:37:01.393 "name": "BaseBdev4", 00:37:01.393 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:01.393 "is_configured": true, 00:37:01.393 "data_offset": 2048, 00:37:01.393 "data_size": 63488 00:37:01.393 } 00:37:01.393 ] 00:37:01.393 }' 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:01.393 09:03:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:37:02.328 [2024-07-12 09:03:37.388724] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:02.328 09:03:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.328 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.587 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.587 "name": "Existed_Raid", 00:37:02.587 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:02.587 "strip_size_kb": 64, 00:37:02.587 "state": "configuring", 00:37:02.587 "raid_level": "raid5f", 00:37:02.587 "superblock": true, 00:37:02.587 "num_base_bdevs": 4, 00:37:02.587 "num_base_bdevs_discovered": 2, 00:37:02.587 "num_base_bdevs_operational": 4, 00:37:02.587 "base_bdevs_list": [ 00:37:02.587 { 00:37:02.587 "name": "BaseBdev1", 00:37:02.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.587 "is_configured": false, 00:37:02.587 "data_offset": 0, 00:37:02.587 "data_size": 0 00:37:02.587 }, 00:37:02.587 { 00:37:02.587 "name": null, 00:37:02.587 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:02.587 "is_configured": false, 00:37:02.587 "data_offset": 2048, 00:37:02.587 "data_size": 63488 00:37:02.587 }, 00:37:02.587 { 00:37:02.587 "name": "BaseBdev3", 00:37:02.587 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:02.587 "is_configured": true, 00:37:02.587 "data_offset": 2048, 00:37:02.587 "data_size": 63488 00:37:02.587 }, 00:37:02.587 { 00:37:02.587 "name": "BaseBdev4", 00:37:02.587 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:02.587 "is_configured": true, 00:37:02.587 "data_offset": 2048, 00:37:02.587 "data_size": 63488 00:37:02.587 } 00:37:02.587 ] 00:37:02.587 }' 00:37:02.587 09:03:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.587 09:03:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:03.155 09:03:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.155 09:03:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:03.413 09:03:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:37:03.414 09:03:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:03.672 [2024-07-12 09:03:38.728060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:03.672 BaseBdev1 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:03.672 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:03.931 09:03:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:04.190 [ 00:37:04.190 { 00:37:04.190 "name": "BaseBdev1", 00:37:04.190 "aliases": [ 00:37:04.190 "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf" 00:37:04.190 ], 00:37:04.190 "product_name": "Malloc disk", 00:37:04.190 "block_size": 512, 00:37:04.190 "num_blocks": 65536, 00:37:04.190 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:04.190 "assigned_rate_limits": { 00:37:04.190 "rw_ios_per_sec": 0, 00:37:04.190 "rw_mbytes_per_sec": 0, 00:37:04.190 "r_mbytes_per_sec": 0, 00:37:04.190 "w_mbytes_per_sec": 0 00:37:04.190 }, 00:37:04.190 "claimed": true, 00:37:04.190 "claim_type": "exclusive_write", 00:37:04.190 "zoned": false, 00:37:04.190 "supported_io_types": { 00:37:04.190 "read": true, 00:37:04.190 "write": true, 00:37:04.190 "unmap": true, 00:37:04.190 "flush": true, 00:37:04.190 "reset": true, 00:37:04.190 "nvme_admin": false, 00:37:04.190 "nvme_io": false, 00:37:04.190 "nvme_io_md": false, 00:37:04.190 "write_zeroes": true, 00:37:04.190 "zcopy": true, 00:37:04.190 "get_zone_info": false, 00:37:04.190 "zone_management": false, 00:37:04.190 "zone_append": false, 00:37:04.190 "compare": false, 00:37:04.190 "compare_and_write": false, 00:37:04.190 "abort": true, 00:37:04.190 "seek_hole": false, 00:37:04.190 "seek_data": false, 00:37:04.190 "copy": true, 00:37:04.190 "nvme_iov_md": false 00:37:04.190 }, 00:37:04.190 "memory_domains": [ 00:37:04.190 { 00:37:04.190 "dma_device_id": "system", 00:37:04.190 "dma_device_type": 1 00:37:04.190 }, 00:37:04.190 { 00:37:04.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.190 "dma_device_type": 2 00:37:04.190 } 00:37:04.190 ], 00:37:04.190 "driver_specific": {} 00:37:04.190 } 00:37:04.190 ] 00:37:04.190 09:03:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:37:04.190 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:04.190 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:04.191 09:03:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.191 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:04.449 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:04.449 "name": "Existed_Raid", 00:37:04.449 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:04.449 "strip_size_kb": 64, 00:37:04.449 "state": "configuring", 00:37:04.449 "raid_level": "raid5f", 00:37:04.449 "superblock": true, 00:37:04.449 "num_base_bdevs": 4, 00:37:04.449 "num_base_bdevs_discovered": 3, 00:37:04.449 "num_base_bdevs_operational": 4, 00:37:04.449 "base_bdevs_list": [ 00:37:04.449 { 00:37:04.449 "name": "BaseBdev1", 00:37:04.449 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:04.449 "is_configured": true, 00:37:04.449 "data_offset": 2048, 00:37:04.449 "data_size": 63488 00:37:04.449 }, 00:37:04.449 { 00:37:04.449 "name": null, 00:37:04.449 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:04.449 "is_configured": false, 00:37:04.449 "data_offset": 2048, 00:37:04.449 "data_size": 63488 00:37:04.449 }, 00:37:04.449 { 00:37:04.449 "name": "BaseBdev3", 00:37:04.449 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:04.449 "is_configured": true, 00:37:04.449 "data_offset": 2048, 00:37:04.449 "data_size": 63488 00:37:04.450 }, 00:37:04.450 { 00:37:04.450 "name": "BaseBdev4", 00:37:04.450 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:04.450 "is_configured": true, 00:37:04.450 "data_offset": 2048, 00:37:04.450 "data_size": 63488 00:37:04.450 } 00:37:04.450 ] 00:37:04.450 }' 00:37:04.450 09:03:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:04.450 09:03:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.017 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.017 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:05.275 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:37:05.275 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:37:05.534 [2024-07-12 09:03:40.568437] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.534 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:05.793 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.793 "name": "Existed_Raid", 00:37:05.793 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:05.793 "strip_size_kb": 64, 00:37:05.793 "state": "configuring", 00:37:05.793 "raid_level": "raid5f", 00:37:05.793 "superblock": true, 00:37:05.793 "num_base_bdevs": 4, 00:37:05.793 "num_base_bdevs_discovered": 2, 00:37:05.793 "num_base_bdevs_operational": 4, 00:37:05.793 "base_bdevs_list": [ 00:37:05.793 { 00:37:05.793 "name": "BaseBdev1", 00:37:05.793 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:05.793 "is_configured": true, 00:37:05.793 "data_offset": 2048, 00:37:05.793 "data_size": 63488 00:37:05.793 }, 00:37:05.793 { 00:37:05.793 "name": null, 00:37:05.793 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:05.793 "is_configured": false, 00:37:05.793 "data_offset": 2048, 00:37:05.793 "data_size": 63488 00:37:05.793 }, 00:37:05.793 { 00:37:05.793 "name": null, 00:37:05.793 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:05.793 "is_configured": false, 00:37:05.793 "data_offset": 2048, 00:37:05.793 "data_size": 63488 00:37:05.793 }, 00:37:05.793 { 00:37:05.793 "name": "BaseBdev4", 00:37:05.793 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:05.793 "is_configured": true, 00:37:05.793 "data_offset": 2048, 00:37:05.793 "data_size": 63488 00:37:05.793 } 00:37:05.793 ] 00:37:05.793 }' 00:37:05.793 09:03:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.793 09:03:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.359 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.359 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:06.618 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:37:06.618 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:06.877 [2024-07-12 09:03:41.909230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:06.877 09:03:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.877 09:03:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.135 09:03:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:07.135 "name": "Existed_Raid", 00:37:07.135 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:07.135 "strip_size_kb": 64, 00:37:07.135 "state": "configuring", 00:37:07.135 "raid_level": "raid5f", 00:37:07.135 "superblock": true, 00:37:07.135 "num_base_bdevs": 4, 00:37:07.135 "num_base_bdevs_discovered": 3, 00:37:07.135 "num_base_bdevs_operational": 4, 00:37:07.135 "base_bdevs_list": [ 00:37:07.135 { 00:37:07.135 "name": "BaseBdev1", 00:37:07.135 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:07.135 "is_configured": true, 00:37:07.135 "data_offset": 2048, 00:37:07.135 "data_size": 63488 00:37:07.135 }, 00:37:07.135 { 00:37:07.135 "name": null, 00:37:07.135 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:07.135 "is_configured": false, 00:37:07.135 "data_offset": 2048, 00:37:07.135 "data_size": 63488 00:37:07.135 }, 00:37:07.135 { 00:37:07.135 "name": "BaseBdev3", 00:37:07.135 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:07.135 "is_configured": true, 00:37:07.135 "data_offset": 2048, 00:37:07.135 "data_size": 63488 00:37:07.135 }, 00:37:07.135 { 00:37:07.135 "name": "BaseBdev4", 00:37:07.135 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:07.135 "is_configured": true, 00:37:07.135 "data_offset": 2048, 00:37:07.135 "data_size": 63488 00:37:07.135 } 00:37:07.135 ] 00:37:07.135 }' 00:37:07.135 09:03:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:07.135 09:03:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.702 09:03:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.702 09:03:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:07.960 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:37:07.960 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:08.217 [2024-07-12 09:03:43.269454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.217 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.475 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.475 "name": "Existed_Raid", 00:37:08.475 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:08.475 "strip_size_kb": 64, 00:37:08.475 "state": "configuring", 00:37:08.475 "raid_level": "raid5f", 00:37:08.475 "superblock": true, 00:37:08.475 "num_base_bdevs": 4, 00:37:08.475 "num_base_bdevs_discovered": 2, 00:37:08.475 "num_base_bdevs_operational": 4, 00:37:08.475 "base_bdevs_list": [ 00:37:08.475 { 00:37:08.475 "name": null, 00:37:08.475 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:08.475 "is_configured": false, 00:37:08.475 "data_offset": 2048, 00:37:08.475 "data_size": 63488 00:37:08.475 }, 00:37:08.475 { 00:37:08.475 "name": null, 00:37:08.475 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:08.475 "is_configured": false, 00:37:08.475 "data_offset": 2048, 00:37:08.475 "data_size": 63488 00:37:08.475 }, 00:37:08.475 { 00:37:08.475 "name": "BaseBdev3", 00:37:08.475 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:08.475 "is_configured": true, 00:37:08.475 "data_offset": 2048, 00:37:08.475 "data_size": 63488 00:37:08.475 }, 00:37:08.475 { 00:37:08.475 "name": "BaseBdev4", 00:37:08.475 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:08.475 "is_configured": true, 00:37:08.475 "data_offset": 2048, 00:37:08.475 "data_size": 63488 00:37:08.475 } 00:37:08.475 ] 00:37:08.475 }' 00:37:08.475 09:03:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.475 09:03:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.041 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:09.041 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:09.299 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:37:09.299 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:09.558 [2024-07-12 09:03:44.627807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:09.558 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:09.817 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:09.817 "name": "Existed_Raid", 00:37:09.817 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:09.817 "strip_size_kb": 64, 00:37:09.817 "state": "configuring", 00:37:09.817 "raid_level": "raid5f", 00:37:09.817 "superblock": true, 00:37:09.817 "num_base_bdevs": 4, 00:37:09.817 "num_base_bdevs_discovered": 3, 00:37:09.817 "num_base_bdevs_operational": 4, 00:37:09.817 "base_bdevs_list": [ 00:37:09.817 { 00:37:09.817 "name": null, 00:37:09.817 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:09.817 "is_configured": false, 00:37:09.817 "data_offset": 2048, 00:37:09.817 "data_size": 63488 00:37:09.817 }, 00:37:09.817 { 00:37:09.817 "name": "BaseBdev2", 00:37:09.817 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:09.817 "is_configured": true, 00:37:09.817 "data_offset": 2048, 00:37:09.817 "data_size": 63488 00:37:09.817 }, 00:37:09.817 { 00:37:09.817 "name": "BaseBdev3", 00:37:09.817 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:09.817 "is_configured": true, 00:37:09.817 "data_offset": 2048, 00:37:09.817 "data_size": 63488 00:37:09.817 }, 00:37:09.817 { 00:37:09.817 "name": "BaseBdev4", 00:37:09.817 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:09.817 "is_configured": true, 00:37:09.817 "data_offset": 2048, 00:37:09.817 
"data_size": 63488 00:37:09.817 } 00:37:09.817 ] 00:37:09.817 }' 00:37:09.817 09:03:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:09.817 09:03:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.384 09:03:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.384 09:03:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:10.642 09:03:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:37:10.642 09:03:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.642 09:03:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:10.901 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf 00:37:11.159 [2024-07-12 09:03:46.238066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:11.159 [2024-07-12 09:03:46.238277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:37:11.159 [2024-07-12 09:03:46.238291] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:11.159 NewBaseBdev 00:37:11.159 [2024-07-12 09:03:46.238388] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:11.159 [2024-07-12 09:03:46.243518] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:37:11.159 [2024-07-12 09:03:46.243542] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:37:11.159 [2024-07-12 09:03:46.243679] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:11.159 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:11.418 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:11.676 [ 00:37:11.676 { 00:37:11.676 "name": "NewBaseBdev", 00:37:11.676 "aliases": [ 00:37:11.676 "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf" 00:37:11.676 ], 00:37:11.676 "product_name": "Malloc disk", 00:37:11.676 "block_size": 512, 
00:37:11.676 "num_blocks": 65536, 00:37:11.676 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:11.676 "assigned_rate_limits": { 00:37:11.676 "rw_ios_per_sec": 0, 00:37:11.676 "rw_mbytes_per_sec": 0, 00:37:11.676 "r_mbytes_per_sec": 0, 00:37:11.676 "w_mbytes_per_sec": 0 00:37:11.676 }, 00:37:11.676 "claimed": true, 00:37:11.676 "claim_type": "exclusive_write", 00:37:11.676 "zoned": false, 00:37:11.676 "supported_io_types": { 00:37:11.676 "read": true, 00:37:11.676 "write": true, 00:37:11.676 "unmap": true, 00:37:11.676 "flush": true, 00:37:11.676 "reset": true, 00:37:11.676 "nvme_admin": false, 00:37:11.676 "nvme_io": false, 00:37:11.676 "nvme_io_md": false, 00:37:11.676 "write_zeroes": true, 00:37:11.676 "zcopy": true, 00:37:11.676 "get_zone_info": false, 00:37:11.676 "zone_management": false, 00:37:11.676 "zone_append": false, 00:37:11.676 "compare": false, 00:37:11.676 "compare_and_write": false, 00:37:11.676 "abort": true, 00:37:11.676 "seek_hole": false, 00:37:11.676 "seek_data": false, 00:37:11.676 "copy": true, 00:37:11.676 "nvme_iov_md": false 00:37:11.676 }, 00:37:11.676 "memory_domains": [ 00:37:11.676 { 00:37:11.676 "dma_device_id": "system", 00:37:11.676 "dma_device_type": 1 00:37:11.676 }, 00:37:11.676 { 00:37:11.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:11.676 "dma_device_type": 2 00:37:11.676 } 00:37:11.676 ], 00:37:11.676 "driver_specific": {} 00:37:11.676 } 00:37:11.676 ] 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.676 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.934 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:11.934 "name": "Existed_Raid", 00:37:11.934 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:11.934 "strip_size_kb": 64, 00:37:11.934 "state": "online", 00:37:11.934 "raid_level": "raid5f", 00:37:11.934 "superblock": true, 00:37:11.934 "num_base_bdevs": 4, 00:37:11.934 "num_base_bdevs_discovered": 4, 00:37:11.934 "num_base_bdevs_operational": 4, 00:37:11.934 
"base_bdevs_list": [ 00:37:11.934 { 00:37:11.934 "name": "NewBaseBdev", 00:37:11.934 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:11.934 "is_configured": true, 00:37:11.934 "data_offset": 2048, 00:37:11.934 "data_size": 63488 00:37:11.934 }, 00:37:11.934 { 00:37:11.934 "name": "BaseBdev2", 00:37:11.934 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:11.934 "is_configured": true, 00:37:11.934 "data_offset": 2048, 00:37:11.934 "data_size": 63488 00:37:11.934 }, 00:37:11.934 { 00:37:11.934 "name": "BaseBdev3", 00:37:11.934 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:11.934 "is_configured": true, 00:37:11.934 "data_offset": 2048, 00:37:11.934 "data_size": 63488 00:37:11.934 }, 00:37:11.934 { 00:37:11.934 "name": "BaseBdev4", 00:37:11.934 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:11.934 "is_configured": true, 00:37:11.934 "data_offset": 2048, 00:37:11.934 "data_size": 63488 00:37:11.934 } 00:37:11.934 ] 00:37:11.934 }' 00:37:11.934 09:03:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:11.934 09:03:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:12.499 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:12.756 [2024-07-12 09:03:47.850107] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:12.756 "name": "Existed_Raid", 00:37:12.756 "aliases": [ 00:37:12.756 "fa14fe5b-6c1d-449f-9557-36877bef8d66" 00:37:12.756 ], 00:37:12.756 "product_name": "Raid Volume", 00:37:12.756 "block_size": 512, 00:37:12.756 "num_blocks": 190464, 00:37:12.756 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:12.756 "assigned_rate_limits": { 00:37:12.756 "rw_ios_per_sec": 0, 00:37:12.756 "rw_mbytes_per_sec": 0, 00:37:12.756 "r_mbytes_per_sec": 0, 00:37:12.756 "w_mbytes_per_sec": 0 00:37:12.756 }, 00:37:12.756 "claimed": false, 00:37:12.756 "zoned": false, 00:37:12.756 "supported_io_types": { 00:37:12.756 "read": true, 00:37:12.756 "write": true, 00:37:12.756 "unmap": false, 00:37:12.756 "flush": false, 00:37:12.756 "reset": true, 00:37:12.756 "nvme_admin": false, 00:37:12.756 "nvme_io": false, 00:37:12.756 "nvme_io_md": false, 00:37:12.756 "write_zeroes": true, 00:37:12.756 "zcopy": false, 00:37:12.756 "get_zone_info": false, 00:37:12.756 "zone_management": false, 00:37:12.756 "zone_append": false, 00:37:12.756 "compare": false, 00:37:12.756 "compare_and_write": false, 00:37:12.756 "abort": false, 00:37:12.756 "seek_hole": false, 
00:37:12.756 "seek_data": false, 00:37:12.756 "copy": false, 00:37:12.756 "nvme_iov_md": false 00:37:12.756 }, 00:37:12.756 "driver_specific": { 00:37:12.756 "raid": { 00:37:12.756 "uuid": "fa14fe5b-6c1d-449f-9557-36877bef8d66", 00:37:12.756 "strip_size_kb": 64, 00:37:12.756 "state": "online", 00:37:12.756 "raid_level": "raid5f", 00:37:12.756 "superblock": true, 00:37:12.756 "num_base_bdevs": 4, 00:37:12.756 "num_base_bdevs_discovered": 4, 00:37:12.756 "num_base_bdevs_operational": 4, 00:37:12.756 "base_bdevs_list": [ 00:37:12.756 { 00:37:12.756 "name": "NewBaseBdev", 00:37:12.756 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:12.756 "is_configured": true, 00:37:12.756 "data_offset": 2048, 00:37:12.756 "data_size": 63488 00:37:12.756 }, 00:37:12.756 { 00:37:12.756 "name": "BaseBdev2", 00:37:12.756 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:12.756 "is_configured": true, 00:37:12.756 "data_offset": 2048, 00:37:12.756 "data_size": 63488 00:37:12.756 }, 00:37:12.756 { 00:37:12.756 "name": "BaseBdev3", 00:37:12.756 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:12.756 "is_configured": true, 00:37:12.756 "data_offset": 2048, 00:37:12.756 "data_size": 63488 00:37:12.756 }, 00:37:12.756 { 00:37:12.756 "name": "BaseBdev4", 00:37:12.756 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:12.756 "is_configured": true, 00:37:12.756 "data_offset": 2048, 00:37:12.756 "data_size": 63488 00:37:12.756 } 00:37:12.756 ] 00:37:12.756 } 00:37:12.756 } 00:37:12.756 }' 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:37:12.756 BaseBdev2 00:37:12.756 BaseBdev3 00:37:12.756 BaseBdev4' 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:37:12.756 09:03:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:13.013 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:13.013 "name": "NewBaseBdev", 00:37:13.013 "aliases": [ 00:37:13.013 "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf" 00:37:13.013 ], 00:37:13.013 "product_name": "Malloc disk", 00:37:13.013 "block_size": 512, 00:37:13.013 "num_blocks": 65536, 00:37:13.013 "uuid": "eeb7630e-27a2-4fcd-8f6e-d3f5aab4a0cf", 00:37:13.013 "assigned_rate_limits": { 00:37:13.013 "rw_ios_per_sec": 0, 00:37:13.013 "rw_mbytes_per_sec": 0, 00:37:13.013 "r_mbytes_per_sec": 0, 00:37:13.013 "w_mbytes_per_sec": 0 00:37:13.013 }, 00:37:13.013 "claimed": true, 00:37:13.013 "claim_type": "exclusive_write", 00:37:13.013 "zoned": false, 00:37:13.013 "supported_io_types": { 00:37:13.013 "read": true, 00:37:13.013 "write": true, 00:37:13.013 "unmap": true, 00:37:13.013 "flush": true, 00:37:13.013 "reset": true, 00:37:13.013 "nvme_admin": false, 00:37:13.013 "nvme_io": false, 00:37:13.013 "nvme_io_md": false, 00:37:13.013 "write_zeroes": true, 00:37:13.013 "zcopy": true, 00:37:13.013 "get_zone_info": false, 00:37:13.013 "zone_management": false, 00:37:13.013 "zone_append": false, 00:37:13.013 "compare": false, 00:37:13.013 "compare_and_write": false, 00:37:13.013 "abort": true, 
00:37:13.013 "seek_hole": false, 00:37:13.013 "seek_data": false, 00:37:13.013 "copy": true, 00:37:13.013 "nvme_iov_md": false 00:37:13.013 }, 00:37:13.013 "memory_domains": [ 00:37:13.013 { 00:37:13.013 "dma_device_id": "system", 00:37:13.013 "dma_device_type": 1 00:37:13.013 }, 00:37:13.013 { 00:37:13.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.013 "dma_device_type": 2 00:37:13.013 } 00:37:13.013 ], 00:37:13.013 "driver_specific": {} 00:37:13.013 }' 00:37:13.013 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:13.271 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:13.529 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:13.787 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:13.787 "name": "BaseBdev2", 00:37:13.787 "aliases": [ 00:37:13.787 "63b911f0-a63f-45a3-8f9d-7f0240145af5" 00:37:13.787 ], 00:37:13.787 "product_name": "Malloc disk", 00:37:13.787 "block_size": 512, 00:37:13.787 "num_blocks": 65536, 00:37:13.787 "uuid": "63b911f0-a63f-45a3-8f9d-7f0240145af5", 00:37:13.787 "assigned_rate_limits": { 00:37:13.787 "rw_ios_per_sec": 0, 00:37:13.787 "rw_mbytes_per_sec": 0, 00:37:13.787 "r_mbytes_per_sec": 0, 00:37:13.787 "w_mbytes_per_sec": 0 00:37:13.787 }, 00:37:13.787 "claimed": true, 00:37:13.787 "claim_type": "exclusive_write", 00:37:13.787 "zoned": false, 00:37:13.787 "supported_io_types": { 00:37:13.787 "read": true, 00:37:13.787 "write": true, 00:37:13.787 "unmap": true, 00:37:13.787 "flush": true, 00:37:13.787 "reset": true, 00:37:13.787 "nvme_admin": false, 00:37:13.787 "nvme_io": false, 00:37:13.787 "nvme_io_md": false, 00:37:13.787 "write_zeroes": true, 00:37:13.787 "zcopy": true, 00:37:13.787 "get_zone_info": false, 00:37:13.787 "zone_management": false, 00:37:13.787 "zone_append": false, 00:37:13.787 "compare": false, 00:37:13.787 "compare_and_write": false, 00:37:13.787 "abort": true, 00:37:13.787 "seek_hole": false, 00:37:13.787 "seek_data": false, 00:37:13.787 "copy": true, 00:37:13.787 "nvme_iov_md": 
false 00:37:13.787 }, 00:37:13.787 "memory_domains": [ 00:37:13.787 { 00:37:13.787 "dma_device_id": "system", 00:37:13.787 "dma_device_type": 1 00:37:13.787 }, 00:37:13.787 { 00:37:13.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.787 "dma_device_type": 2 00:37:13.787 } 00:37:13.787 ], 00:37:13.787 "driver_specific": {} 00:37:13.787 }' 00:37:13.787 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:13.787 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:14.044 09:03:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:14.044 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:14.044 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:14.044 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:14.044 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:14.044 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:37:14.302 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:14.559 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:14.559 "name": "BaseBdev3", 00:37:14.559 "aliases": [ 00:37:14.559 "68ad1206-8434-40e1-80d4-87af9c1aa350" 00:37:14.559 ], 00:37:14.559 "product_name": "Malloc disk", 00:37:14.559 "block_size": 512, 00:37:14.559 "num_blocks": 65536, 00:37:14.559 "uuid": "68ad1206-8434-40e1-80d4-87af9c1aa350", 00:37:14.559 "assigned_rate_limits": { 00:37:14.559 "rw_ios_per_sec": 0, 00:37:14.559 "rw_mbytes_per_sec": 0, 00:37:14.559 "r_mbytes_per_sec": 0, 00:37:14.559 "w_mbytes_per_sec": 0 00:37:14.559 }, 00:37:14.559 "claimed": true, 00:37:14.559 "claim_type": "exclusive_write", 00:37:14.559 "zoned": false, 00:37:14.559 "supported_io_types": { 00:37:14.559 "read": true, 00:37:14.559 "write": true, 00:37:14.559 "unmap": true, 00:37:14.559 "flush": true, 00:37:14.559 "reset": true, 00:37:14.559 "nvme_admin": false, 00:37:14.559 "nvme_io": false, 00:37:14.559 "nvme_io_md": false, 00:37:14.559 "write_zeroes": true, 00:37:14.559 "zcopy": true, 00:37:14.559 "get_zone_info": false, 00:37:14.559 "zone_management": false, 00:37:14.559 "zone_append": false, 00:37:14.559 "compare": false, 00:37:14.559 "compare_and_write": false, 00:37:14.559 "abort": true, 00:37:14.559 "seek_hole": false, 00:37:14.559 "seek_data": false, 00:37:14.559 "copy": true, 00:37:14.559 "nvme_iov_md": false 00:37:14.559 }, 00:37:14.559 "memory_domains": [ 00:37:14.559 { 00:37:14.559 "dma_device_id": "system", 
00:37:14.559 "dma_device_type": 1 00:37:14.559 }, 00:37:14.559 { 00:37:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:14.559 "dma_device_type": 2 00:37:14.559 } 00:37:14.559 ], 00:37:14.559 "driver_specific": {} 00:37:14.559 }' 00:37:14.559 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:14.559 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:14.559 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:14.559 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:14.817 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:14.818 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:14.818 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:14.818 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:14.818 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:14.818 09:03:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:14.818 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.076 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:15.076 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:15.076 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:37:15.076 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:15.334 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:15.334 "name": "BaseBdev4", 00:37:15.334 "aliases": [ 00:37:15.334 "51434cf1-058e-4dc2-938d-f171213a5c3f" 00:37:15.334 ], 00:37:15.334 "product_name": "Malloc disk", 00:37:15.334 "block_size": 512, 00:37:15.334 "num_blocks": 65536, 00:37:15.334 "uuid": "51434cf1-058e-4dc2-938d-f171213a5c3f", 00:37:15.334 "assigned_rate_limits": { 00:37:15.334 "rw_ios_per_sec": 0, 00:37:15.334 "rw_mbytes_per_sec": 0, 00:37:15.334 "r_mbytes_per_sec": 0, 00:37:15.334 "w_mbytes_per_sec": 0 00:37:15.334 }, 00:37:15.334 "claimed": true, 00:37:15.334 "claim_type": "exclusive_write", 00:37:15.334 "zoned": false, 00:37:15.334 "supported_io_types": { 00:37:15.334 "read": true, 00:37:15.334 "write": true, 00:37:15.334 "unmap": true, 00:37:15.334 "flush": true, 00:37:15.334 "reset": true, 00:37:15.334 "nvme_admin": false, 00:37:15.334 "nvme_io": false, 00:37:15.334 "nvme_io_md": false, 00:37:15.334 "write_zeroes": true, 00:37:15.334 "zcopy": true, 00:37:15.334 "get_zone_info": false, 00:37:15.335 "zone_management": false, 00:37:15.335 "zone_append": false, 00:37:15.335 "compare": false, 00:37:15.335 "compare_and_write": false, 00:37:15.335 "abort": true, 00:37:15.335 "seek_hole": false, 00:37:15.335 "seek_data": false, 00:37:15.335 "copy": true, 00:37:15.335 "nvme_iov_md": false 00:37:15.335 }, 00:37:15.335 "memory_domains": [ 00:37:15.335 { 00:37:15.335 "dma_device_id": "system", 00:37:15.335 "dma_device_type": 1 00:37:15.335 }, 00:37:15.335 { 00:37:15.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:37:15.335 "dma_device_type": 2 00:37:15.335 } 00:37:15.335 ], 00:37:15.335 "driver_specific": {} 00:37:15.335 }' 00:37:15.335 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.335 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:15.335 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:15.335 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.335 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:15.593 09:03:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:15.851 [2024-07-12 09:03:51.010535] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:15.851 [2024-07-12 09:03:51.010570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:15.851 [2024-07-12 09:03:51.010647] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:15.851 [2024-07-12 09:03:51.010926] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:15.851 [2024-07-12 09:03:51.010948] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 158132 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 158132 ']' 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 158132 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.851 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158132 00:37:16.110 killing process with pid 158132 00:37:16.110 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:16.110 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:16.110 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158132' 00:37:16.110 09:03:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 158132 00:37:16.110 09:03:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 158132 00:37:16.110 [2024-07-12 09:03:51.047277] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:16.110 [2024-07-12 09:03:51.299677] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:17.044 ************************************ 00:37:17.044 END TEST raid5f_state_function_test_sb 00:37:17.044 ************************************ 00:37:17.044 09:03:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:37:17.044 00:37:17.044 real 0m33.774s 00:37:17.044 user 1m3.925s 00:37:17.044 sys 0m3.532s 00:37:17.044 09:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.044 09:03:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:17.303 09:03:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:17.303 09:03:52 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:37:17.303 09:03:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:17.303 09:03:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.303 09:03:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:17.303 ************************************ 00:37:17.303 START TEST raid5f_superblock_test 00:37:17.303 ************************************ 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=159278 00:37:17.303 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 159278 /var/tmp/spdk-raid.sock 
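A minimal sketch of the flow the raid5f_superblock_test trace below exercises, reduced to the underlying SPDK JSON-RPC calls. The bdev names, the 64 KiB strip size, the raid5f level and the -s superblock flag mirror the commands echoed in the trace; the loop, the shell scaffolding and the final jq .state filter are illustrative assumptions, not part of the captured run:

# assumes a bdev_svc app is already listening on /var/tmp/spdk-raid.sock
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# four 32 MB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev pt1..pt4
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# assemble pt1..pt4 into a raid5f volume with a 64 KiB strip size and an on-disk superblock (-s)
rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# the test then reads the state back and expects "online"
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
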
00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 159278 ']' 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:17.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.304 09:03:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.304 [2024-07-12 09:03:52.349384] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:37:17.304 [2024-07-12 09:03:52.349594] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159278 ] 00:37:17.572 [2024-07-12 09:03:52.523487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.572 [2024-07-12 09:03:52.744126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.860 [2024-07-12 09:03:52.906455] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:18.123 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:37:18.381 malloc1 00:37:18.382 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:18.641 [2024-07-12 09:03:53.646627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:18.641 [2024-07-12 09:03:53.646866] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.641 [2024-07-12 09:03:53.647003] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:18.641 [2024-07-12 09:03:53.647112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.641 [2024-07-12 09:03:53.649115] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.641 [2024-07-12 09:03:53.649269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:18.641 pt1 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:18.641 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:37:18.899 malloc2 00:37:18.899 09:03:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:19.158 [2024-07-12 09:03:54.155278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:19.158 [2024-07-12 09:03:54.155505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.158 [2024-07-12 09:03:54.155572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:19.158 [2024-07-12 09:03:54.155824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.158 [2024-07-12 09:03:54.158205] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.158 [2024-07-12 09:03:54.158372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:19.158 pt2 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:19.158 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:37:19.417 malloc3 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:19.417 [2024-07-12 09:03:54.541116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:19.417 [2024-07-12 09:03:54.541306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.417 [2024-07-12 09:03:54.541370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:37:19.417 [2024-07-12 09:03:54.541624] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.417 [2024-07-12 09:03:54.543819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.417 [2024-07-12 09:03:54.543983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:19.417 pt3 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:19.417 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:37:19.675 malloc4 00:37:19.675 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:19.933 [2024-07-12 09:03:54.982160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:19.933 [2024-07-12 09:03:54.982355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.933 [2024-07-12 09:03:54.982419] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:19.933 [2024-07-12 09:03:54.982701] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.933 [2024-07-12 09:03:54.985097] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.933 [2024-07-12 09:03:54.985258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:19.933 pt4 00:37:19.933 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:19.933 09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:19.933 
09:03:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:37:20.192 [2024-07-12 09:03:55.218489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:20.192 [2024-07-12 09:03:55.220615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:20.192 [2024-07-12 09:03:55.220853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:20.192 [2024-07-12 09:03:55.221023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:20.192 [2024-07-12 09:03:55.221371] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:37:20.192 [2024-07-12 09:03:55.221511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:20.192 [2024-07-12 09:03:55.221698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:20.192 [2024-07-12 09:03:55.227645] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:37:20.192 [2024-07-12 09:03:55.227778] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:37:20.192 [2024-07-12 09:03:55.228026] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.192 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.451 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:20.451 "name": "raid_bdev1", 00:37:20.451 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:20.451 "strip_size_kb": 64, 00:37:20.451 "state": "online", 00:37:20.451 "raid_level": "raid5f", 00:37:20.451 "superblock": true, 00:37:20.451 "num_base_bdevs": 4, 00:37:20.451 "num_base_bdevs_discovered": 4, 00:37:20.451 "num_base_bdevs_operational": 4, 00:37:20.451 "base_bdevs_list": [ 00:37:20.451 { 00:37:20.451 "name": "pt1", 00:37:20.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:20.451 "is_configured": true, 00:37:20.451 
"data_offset": 2048, 00:37:20.451 "data_size": 63488 00:37:20.451 }, 00:37:20.451 { 00:37:20.451 "name": "pt2", 00:37:20.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.451 "is_configured": true, 00:37:20.451 "data_offset": 2048, 00:37:20.451 "data_size": 63488 00:37:20.451 }, 00:37:20.451 { 00:37:20.451 "name": "pt3", 00:37:20.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:20.451 "is_configured": true, 00:37:20.451 "data_offset": 2048, 00:37:20.451 "data_size": 63488 00:37:20.451 }, 00:37:20.451 { 00:37:20.451 "name": "pt4", 00:37:20.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:20.451 "is_configured": true, 00:37:20.451 "data_offset": 2048, 00:37:20.451 "data_size": 63488 00:37:20.451 } 00:37:20.451 ] 00:37:20.451 }' 00:37:20.451 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:20.451 09:03:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:21.018 09:03:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:21.277 [2024-07-12 09:03:56.243217] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:21.277 "name": "raid_bdev1", 00:37:21.277 "aliases": [ 00:37:21.277 "511cb381-d556-4949-96c1-170fcaec6e67" 00:37:21.277 ], 00:37:21.277 "product_name": "Raid Volume", 00:37:21.277 "block_size": 512, 00:37:21.277 "num_blocks": 190464, 00:37:21.277 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:21.277 "assigned_rate_limits": { 00:37:21.277 "rw_ios_per_sec": 0, 00:37:21.277 "rw_mbytes_per_sec": 0, 00:37:21.277 "r_mbytes_per_sec": 0, 00:37:21.277 "w_mbytes_per_sec": 0 00:37:21.277 }, 00:37:21.277 "claimed": false, 00:37:21.277 "zoned": false, 00:37:21.277 "supported_io_types": { 00:37:21.277 "read": true, 00:37:21.277 "write": true, 00:37:21.277 "unmap": false, 00:37:21.277 "flush": false, 00:37:21.277 "reset": true, 00:37:21.277 "nvme_admin": false, 00:37:21.277 "nvme_io": false, 00:37:21.277 "nvme_io_md": false, 00:37:21.277 "write_zeroes": true, 00:37:21.277 "zcopy": false, 00:37:21.277 "get_zone_info": false, 00:37:21.277 "zone_management": false, 00:37:21.277 "zone_append": false, 00:37:21.277 "compare": false, 00:37:21.277 "compare_and_write": false, 00:37:21.277 "abort": false, 00:37:21.277 "seek_hole": false, 00:37:21.277 "seek_data": false, 00:37:21.277 "copy": false, 00:37:21.277 "nvme_iov_md": false 00:37:21.277 }, 00:37:21.277 "driver_specific": { 00:37:21.277 "raid": { 00:37:21.277 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:21.277 "strip_size_kb": 64, 00:37:21.277 "state": 
"online", 00:37:21.277 "raid_level": "raid5f", 00:37:21.277 "superblock": true, 00:37:21.277 "num_base_bdevs": 4, 00:37:21.277 "num_base_bdevs_discovered": 4, 00:37:21.277 "num_base_bdevs_operational": 4, 00:37:21.277 "base_bdevs_list": [ 00:37:21.277 { 00:37:21.277 "name": "pt1", 00:37:21.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:21.277 "is_configured": true, 00:37:21.277 "data_offset": 2048, 00:37:21.277 "data_size": 63488 00:37:21.277 }, 00:37:21.277 { 00:37:21.277 "name": "pt2", 00:37:21.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.277 "is_configured": true, 00:37:21.277 "data_offset": 2048, 00:37:21.277 "data_size": 63488 00:37:21.277 }, 00:37:21.277 { 00:37:21.277 "name": "pt3", 00:37:21.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:21.277 "is_configured": true, 00:37:21.277 "data_offset": 2048, 00:37:21.277 "data_size": 63488 00:37:21.277 }, 00:37:21.277 { 00:37:21.277 "name": "pt4", 00:37:21.277 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:21.277 "is_configured": true, 00:37:21.277 "data_offset": 2048, 00:37:21.277 "data_size": 63488 00:37:21.277 } 00:37:21.277 ] 00:37:21.277 } 00:37:21.277 } 00:37:21.277 }' 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:21.277 pt2 00:37:21.277 pt3 00:37:21.277 pt4' 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:21.277 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:21.536 "name": "pt1", 00:37:21.536 "aliases": [ 00:37:21.536 "00000000-0000-0000-0000-000000000001" 00:37:21.536 ], 00:37:21.536 "product_name": "passthru", 00:37:21.536 "block_size": 512, 00:37:21.536 "num_blocks": 65536, 00:37:21.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:21.536 "assigned_rate_limits": { 00:37:21.536 "rw_ios_per_sec": 0, 00:37:21.536 "rw_mbytes_per_sec": 0, 00:37:21.536 "r_mbytes_per_sec": 0, 00:37:21.536 "w_mbytes_per_sec": 0 00:37:21.536 }, 00:37:21.536 "claimed": true, 00:37:21.536 "claim_type": "exclusive_write", 00:37:21.536 "zoned": false, 00:37:21.536 "supported_io_types": { 00:37:21.536 "read": true, 00:37:21.536 "write": true, 00:37:21.536 "unmap": true, 00:37:21.536 "flush": true, 00:37:21.536 "reset": true, 00:37:21.536 "nvme_admin": false, 00:37:21.536 "nvme_io": false, 00:37:21.536 "nvme_io_md": false, 00:37:21.536 "write_zeroes": true, 00:37:21.536 "zcopy": true, 00:37:21.536 "get_zone_info": false, 00:37:21.536 "zone_management": false, 00:37:21.536 "zone_append": false, 00:37:21.536 "compare": false, 00:37:21.536 "compare_and_write": false, 00:37:21.536 "abort": true, 00:37:21.536 "seek_hole": false, 00:37:21.536 "seek_data": false, 00:37:21.536 "copy": true, 00:37:21.536 "nvme_iov_md": false 00:37:21.536 }, 00:37:21.536 "memory_domains": [ 00:37:21.536 { 00:37:21.536 "dma_device_id": "system", 00:37:21.536 "dma_device_type": 1 00:37:21.536 }, 00:37:21.536 { 00:37:21.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:21.536 "dma_device_type": 2 00:37:21.536 } 
00:37:21.536 ], 00:37:21.536 "driver_specific": { 00:37:21.536 "passthru": { 00:37:21.536 "name": "pt1", 00:37:21.536 "base_bdev_name": "malloc1" 00:37:21.536 } 00:37:21.536 } 00:37:21.536 }' 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:21.536 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:21.795 09:03:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:22.053 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:22.053 "name": "pt2", 00:37:22.053 "aliases": [ 00:37:22.053 "00000000-0000-0000-0000-000000000002" 00:37:22.053 ], 00:37:22.053 "product_name": "passthru", 00:37:22.053 "block_size": 512, 00:37:22.053 "num_blocks": 65536, 00:37:22.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:22.053 "assigned_rate_limits": { 00:37:22.053 "rw_ios_per_sec": 0, 00:37:22.053 "rw_mbytes_per_sec": 0, 00:37:22.053 "r_mbytes_per_sec": 0, 00:37:22.053 "w_mbytes_per_sec": 0 00:37:22.053 }, 00:37:22.053 "claimed": true, 00:37:22.053 "claim_type": "exclusive_write", 00:37:22.053 "zoned": false, 00:37:22.053 "supported_io_types": { 00:37:22.053 "read": true, 00:37:22.053 "write": true, 00:37:22.053 "unmap": true, 00:37:22.053 "flush": true, 00:37:22.053 "reset": true, 00:37:22.053 "nvme_admin": false, 00:37:22.053 "nvme_io": false, 00:37:22.053 "nvme_io_md": false, 00:37:22.053 "write_zeroes": true, 00:37:22.053 "zcopy": true, 00:37:22.053 "get_zone_info": false, 00:37:22.053 "zone_management": false, 00:37:22.053 "zone_append": false, 00:37:22.053 "compare": false, 00:37:22.053 "compare_and_write": false, 00:37:22.053 "abort": true, 00:37:22.053 "seek_hole": false, 00:37:22.053 "seek_data": false, 00:37:22.053 "copy": true, 00:37:22.053 "nvme_iov_md": false 00:37:22.053 }, 00:37:22.053 "memory_domains": [ 00:37:22.053 { 00:37:22.053 "dma_device_id": "system", 00:37:22.053 "dma_device_type": 1 00:37:22.053 }, 00:37:22.053 { 00:37:22.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:22.053 "dma_device_type": 2 00:37:22.053 } 00:37:22.053 ], 00:37:22.053 "driver_specific": { 00:37:22.053 "passthru": { 00:37:22.053 "name": "pt2", 00:37:22.053 
"base_bdev_name": "malloc2" 00:37:22.053 } 00:37:22.053 } 00:37:22.053 }' 00:37:22.053 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.053 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.053 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:22.053 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.311 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:22.312 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:22.571 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:22.571 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:22.571 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:37:22.571 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:22.571 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:22.571 "name": "pt3", 00:37:22.571 "aliases": [ 00:37:22.571 "00000000-0000-0000-0000-000000000003" 00:37:22.571 ], 00:37:22.571 "product_name": "passthru", 00:37:22.571 "block_size": 512, 00:37:22.571 "num_blocks": 65536, 00:37:22.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:22.571 "assigned_rate_limits": { 00:37:22.571 "rw_ios_per_sec": 0, 00:37:22.571 "rw_mbytes_per_sec": 0, 00:37:22.571 "r_mbytes_per_sec": 0, 00:37:22.571 "w_mbytes_per_sec": 0 00:37:22.571 }, 00:37:22.571 "claimed": true, 00:37:22.571 "claim_type": "exclusive_write", 00:37:22.571 "zoned": false, 00:37:22.571 "supported_io_types": { 00:37:22.571 "read": true, 00:37:22.571 "write": true, 00:37:22.571 "unmap": true, 00:37:22.571 "flush": true, 00:37:22.571 "reset": true, 00:37:22.571 "nvme_admin": false, 00:37:22.571 "nvme_io": false, 00:37:22.571 "nvme_io_md": false, 00:37:22.571 "write_zeroes": true, 00:37:22.571 "zcopy": true, 00:37:22.571 "get_zone_info": false, 00:37:22.571 "zone_management": false, 00:37:22.571 "zone_append": false, 00:37:22.571 "compare": false, 00:37:22.571 "compare_and_write": false, 00:37:22.571 "abort": true, 00:37:22.571 "seek_hole": false, 00:37:22.571 "seek_data": false, 00:37:22.571 "copy": true, 00:37:22.571 "nvme_iov_md": false 00:37:22.571 }, 00:37:22.571 "memory_domains": [ 00:37:22.571 { 00:37:22.571 "dma_device_id": "system", 00:37:22.571 "dma_device_type": 1 00:37:22.571 }, 00:37:22.571 { 00:37:22.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:22.571 "dma_device_type": 2 00:37:22.571 } 00:37:22.571 ], 00:37:22.571 "driver_specific": { 00:37:22.571 "passthru": { 00:37:22.571 "name": "pt3", 00:37:22.571 "base_bdev_name": "malloc3" 00:37:22.571 } 00:37:22.571 } 00:37:22.571 }' 00:37:22.571 09:03:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:22.830 09:03:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:37:23.088 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:23.357 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:23.357 "name": "pt4", 00:37:23.357 "aliases": [ 00:37:23.357 "00000000-0000-0000-0000-000000000004" 00:37:23.357 ], 00:37:23.357 "product_name": "passthru", 00:37:23.357 "block_size": 512, 00:37:23.357 "num_blocks": 65536, 00:37:23.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:23.357 "assigned_rate_limits": { 00:37:23.357 "rw_ios_per_sec": 0, 00:37:23.357 "rw_mbytes_per_sec": 0, 00:37:23.357 "r_mbytes_per_sec": 0, 00:37:23.357 "w_mbytes_per_sec": 0 00:37:23.357 }, 00:37:23.357 "claimed": true, 00:37:23.357 "claim_type": "exclusive_write", 00:37:23.357 "zoned": false, 00:37:23.357 "supported_io_types": { 00:37:23.357 "read": true, 00:37:23.357 "write": true, 00:37:23.357 "unmap": true, 00:37:23.357 "flush": true, 00:37:23.357 "reset": true, 00:37:23.357 "nvme_admin": false, 00:37:23.357 "nvme_io": false, 00:37:23.357 "nvme_io_md": false, 00:37:23.357 "write_zeroes": true, 00:37:23.357 "zcopy": true, 00:37:23.357 "get_zone_info": false, 00:37:23.357 "zone_management": false, 00:37:23.357 "zone_append": false, 00:37:23.357 "compare": false, 00:37:23.357 "compare_and_write": false, 00:37:23.357 "abort": true, 00:37:23.357 "seek_hole": false, 00:37:23.357 "seek_data": false, 00:37:23.357 "copy": true, 00:37:23.357 "nvme_iov_md": false 00:37:23.357 }, 00:37:23.357 "memory_domains": [ 00:37:23.357 { 00:37:23.357 "dma_device_id": "system", 00:37:23.357 "dma_device_type": 1 00:37:23.357 }, 00:37:23.357 { 00:37:23.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:23.357 "dma_device_type": 2 00:37:23.357 } 00:37:23.357 ], 00:37:23.357 "driver_specific": { 00:37:23.357 "passthru": { 00:37:23.357 "name": "pt4", 00:37:23.357 "base_bdev_name": "malloc4" 00:37:23.357 } 00:37:23.357 } 00:37:23.357 }' 00:37:23.357 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:23.357 09:03:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:23.357 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:23.357 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:23.622 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:23.880 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:23.880 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:23.880 09:03:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:37:24.138 [2024-07-12 09:03:59.111874] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:24.138 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=511cb381-d556-4949-96c1-170fcaec6e67 00:37:24.138 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 511cb381-d556-4949-96c1-170fcaec6e67 ']' 00:37:24.138 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:24.138 [2024-07-12 09:03:59.319711] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:24.138 [2024-07-12 09:03:59.319739] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:24.138 [2024-07-12 09:03:59.319817] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:24.138 [2024-07-12 09:03:59.319906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:24.138 [2024-07-12 09:03:59.319918] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:24.396 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:24.654 09:03:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:24.654 09:03:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:24.912 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:24.912 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:37:25.171 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:25.171 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:37:25.429 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:25.429 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:37:25.688 [2024-07-12 09:04:00.811971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:25.688 [2024-07-12 09:04:00.813987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:25.688 [2024-07-12 09:04:00.814061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:25.688 [2024-07-12 09:04:00.814135] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:37:25.688 [2024-07-12 09:04:00.814189] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:25.688 [2024-07-12 09:04:00.814292] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:25.688 [2024-07-12 09:04:00.814330] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:25.688 [2024-07-12 09:04:00.814365] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:37:25.688 [2024-07-12 09:04:00.814388] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:25.688 [2024-07-12 09:04:00.814398] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:37:25.688 request: 00:37:25.688 { 00:37:25.688 "name": "raid_bdev1", 00:37:25.688 "raid_level": "raid5f", 00:37:25.688 "base_bdevs": [ 00:37:25.688 "malloc1", 00:37:25.688 "malloc2", 00:37:25.688 "malloc3", 00:37:25.688 "malloc4" 00:37:25.688 ], 00:37:25.688 "strip_size_kb": 64, 00:37:25.688 "superblock": false, 00:37:25.688 "method": "bdev_raid_create", 00:37:25.688 "req_id": 1 00:37:25.688 } 00:37:25.688 Got JSON-RPC error response 00:37:25.688 response: 00:37:25.688 { 00:37:25.688 "code": -17, 00:37:25.688 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:25.688 } 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.688 09:04:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:25.946 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:25.946 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:25.946 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:26.204 [2024-07-12 09:04:01.199974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:26.204 [2024-07-12 09:04:01.200063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.204 [2024-07-12 09:04:01.200100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:37:26.204 [2024-07-12 09:04:01.200147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.204 [2024-07-12 09:04:01.202560] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.204 [2024-07-12 09:04:01.202609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:26.204 [2024-07-12 09:04:01.202716] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:26.204 [2024-07-12 09:04:01.202805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:26.204 pt1 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.204 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.462 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:26.462 "name": "raid_bdev1", 00:37:26.462 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:26.462 "strip_size_kb": 64, 00:37:26.462 "state": "configuring", 00:37:26.462 "raid_level": "raid5f", 00:37:26.462 "superblock": true, 00:37:26.462 "num_base_bdevs": 4, 00:37:26.462 "num_base_bdevs_discovered": 1, 00:37:26.462 "num_base_bdevs_operational": 4, 00:37:26.462 "base_bdevs_list": [ 00:37:26.462 { 00:37:26.462 "name": "pt1", 00:37:26.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:26.462 "is_configured": true, 00:37:26.462 "data_offset": 2048, 00:37:26.462 "data_size": 63488 00:37:26.462 }, 00:37:26.462 { 00:37:26.462 "name": null, 00:37:26.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:26.462 "is_configured": false, 00:37:26.462 "data_offset": 2048, 00:37:26.462 "data_size": 63488 00:37:26.462 }, 00:37:26.462 { 00:37:26.462 "name": null, 00:37:26.462 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:26.462 "is_configured": false, 00:37:26.462 "data_offset": 2048, 00:37:26.462 "data_size": 63488 00:37:26.462 }, 00:37:26.462 { 00:37:26.462 "name": null, 00:37:26.462 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:26.462 "is_configured": false, 00:37:26.462 "data_offset": 2048, 00:37:26.462 "data_size": 63488 00:37:26.462 } 00:37:26.462 ] 00:37:26.462 }' 00:37:26.462 09:04:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:26.462 09:04:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.028 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:37:27.028 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:27.287 [2024-07-12 09:04:02.352141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:27.287 [2024-07-12 09:04:02.352210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:27.287 [2024-07-12 09:04:02.352271] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:27.287 [2024-07-12 09:04:02.352310] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:27.287 [2024-07-12 09:04:02.352724] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:27.287 [2024-07-12 09:04:02.352762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:27.287 [2024-07-12 09:04:02.352847] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:27.287 [2024-07-12 09:04:02.352873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:27.287 pt2 00:37:27.287 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:27.545 [2024-07-12 09:04:02.620216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.545 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.804 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:27.804 "name": "raid_bdev1", 00:37:27.804 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:27.804 "strip_size_kb": 64, 00:37:27.804 "state": "configuring", 00:37:27.804 "raid_level": "raid5f", 00:37:27.804 "superblock": true, 00:37:27.804 "num_base_bdevs": 4, 00:37:27.804 "num_base_bdevs_discovered": 1, 00:37:27.804 "num_base_bdevs_operational": 4, 00:37:27.804 "base_bdevs_list": [ 00:37:27.804 { 00:37:27.804 "name": "pt1", 00:37:27.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:27.804 "is_configured": true, 00:37:27.804 "data_offset": 2048, 00:37:27.804 "data_size": 63488 00:37:27.804 }, 00:37:27.804 { 00:37:27.804 "name": null, 
00:37:27.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:27.804 "is_configured": false, 00:37:27.804 "data_offset": 2048, 00:37:27.804 "data_size": 63488 00:37:27.804 }, 00:37:27.804 { 00:37:27.804 "name": null, 00:37:27.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:27.804 "is_configured": false, 00:37:27.804 "data_offset": 2048, 00:37:27.804 "data_size": 63488 00:37:27.804 }, 00:37:27.804 { 00:37:27.804 "name": null, 00:37:27.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:27.804 "is_configured": false, 00:37:27.804 "data_offset": 2048, 00:37:27.804 "data_size": 63488 00:37:27.804 } 00:37:27.804 ] 00:37:27.804 }' 00:37:27.804 09:04:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:27.804 09:04:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.369 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:28.369 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:28.369 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:28.627 [2024-07-12 09:04:03.768432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:28.627 [2024-07-12 09:04:03.768493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.627 [2024-07-12 09:04:03.768530] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:28.627 [2024-07-12 09:04:03.768571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.627 [2024-07-12 09:04:03.768958] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.627 [2024-07-12 09:04:03.769006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:28.627 [2024-07-12 09:04:03.769083] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:28.627 [2024-07-12 09:04:03.769107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:28.627 pt2 00:37:28.627 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:28.627 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:28.627 09:04:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:28.885 [2024-07-12 09:04:04.028468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:28.885 [2024-07-12 09:04:04.028534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.885 [2024-07-12 09:04:04.028558] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:28.885 [2024-07-12 09:04:04.028594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.885 [2024-07-12 09:04:04.028934] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.886 [2024-07-12 09:04:04.028974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:28.886 [2024-07-12 09:04:04.029050] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt3 00:37:28.886 [2024-07-12 09:04:04.029073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:28.886 pt3 00:37:28.886 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:28.886 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:28.886 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:29.144 [2024-07-12 09:04:04.220482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:29.144 [2024-07-12 09:04:04.220538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:29.144 [2024-07-12 09:04:04.220567] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:37:29.144 [2024-07-12 09:04:04.220613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:29.144 [2024-07-12 09:04:04.220970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:29.144 [2024-07-12 09:04:04.221013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:29.144 [2024-07-12 09:04:04.221089] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:29.144 [2024-07-12 09:04:04.221120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:29.144 [2024-07-12 09:04:04.221250] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:37:29.144 [2024-07-12 09:04:04.221269] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:29.144 [2024-07-12 09:04:04.221358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:29.144 [2024-07-12 09:04:04.226590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:37:29.144 [2024-07-12 09:04:04.226613] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:37:29.144 pt4 00:37:29.144 [2024-07-12 09:04:04.226763] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:29.144 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.145 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.403 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.403 "name": "raid_bdev1", 00:37:29.403 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:29.403 "strip_size_kb": 64, 00:37:29.403 "state": "online", 00:37:29.403 "raid_level": "raid5f", 00:37:29.403 "superblock": true, 00:37:29.403 "num_base_bdevs": 4, 00:37:29.403 "num_base_bdevs_discovered": 4, 00:37:29.403 "num_base_bdevs_operational": 4, 00:37:29.403 "base_bdevs_list": [ 00:37:29.403 { 00:37:29.403 "name": "pt1", 00:37:29.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:29.403 "is_configured": true, 00:37:29.403 "data_offset": 2048, 00:37:29.403 "data_size": 63488 00:37:29.403 }, 00:37:29.403 { 00:37:29.403 "name": "pt2", 00:37:29.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:29.403 "is_configured": true, 00:37:29.403 "data_offset": 2048, 00:37:29.403 "data_size": 63488 00:37:29.403 }, 00:37:29.403 { 00:37:29.403 "name": "pt3", 00:37:29.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:29.403 "is_configured": true, 00:37:29.403 "data_offset": 2048, 00:37:29.403 "data_size": 63488 00:37:29.403 }, 00:37:29.403 { 00:37:29.403 "name": "pt4", 00:37:29.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:29.403 "is_configured": true, 00:37:29.403 "data_offset": 2048, 00:37:29.403 "data_size": 63488 00:37:29.403 } 00:37:29.403 ] 00:37:29.403 }' 00:37:29.403 09:04:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.403 09:04:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:29.970 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:30.229 [2024-07-12 09:04:05.369520] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:30.229 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:30.229 "name": "raid_bdev1", 00:37:30.229 "aliases": [ 00:37:30.229 "511cb381-d556-4949-96c1-170fcaec6e67" 00:37:30.229 ], 00:37:30.229 "product_name": "Raid Volume", 00:37:30.229 "block_size": 512, 00:37:30.229 "num_blocks": 190464, 00:37:30.229 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:30.229 
"assigned_rate_limits": { 00:37:30.229 "rw_ios_per_sec": 0, 00:37:30.229 "rw_mbytes_per_sec": 0, 00:37:30.229 "r_mbytes_per_sec": 0, 00:37:30.229 "w_mbytes_per_sec": 0 00:37:30.229 }, 00:37:30.229 "claimed": false, 00:37:30.229 "zoned": false, 00:37:30.229 "supported_io_types": { 00:37:30.229 "read": true, 00:37:30.229 "write": true, 00:37:30.229 "unmap": false, 00:37:30.229 "flush": false, 00:37:30.229 "reset": true, 00:37:30.229 "nvme_admin": false, 00:37:30.229 "nvme_io": false, 00:37:30.229 "nvme_io_md": false, 00:37:30.229 "write_zeroes": true, 00:37:30.229 "zcopy": false, 00:37:30.229 "get_zone_info": false, 00:37:30.229 "zone_management": false, 00:37:30.229 "zone_append": false, 00:37:30.229 "compare": false, 00:37:30.229 "compare_and_write": false, 00:37:30.229 "abort": false, 00:37:30.229 "seek_hole": false, 00:37:30.229 "seek_data": false, 00:37:30.229 "copy": false, 00:37:30.229 "nvme_iov_md": false 00:37:30.229 }, 00:37:30.229 "driver_specific": { 00:37:30.229 "raid": { 00:37:30.229 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:30.229 "strip_size_kb": 64, 00:37:30.229 "state": "online", 00:37:30.229 "raid_level": "raid5f", 00:37:30.229 "superblock": true, 00:37:30.229 "num_base_bdevs": 4, 00:37:30.229 "num_base_bdevs_discovered": 4, 00:37:30.229 "num_base_bdevs_operational": 4, 00:37:30.229 "base_bdevs_list": [ 00:37:30.229 { 00:37:30.229 "name": "pt1", 00:37:30.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:30.229 "is_configured": true, 00:37:30.229 "data_offset": 2048, 00:37:30.229 "data_size": 63488 00:37:30.229 }, 00:37:30.229 { 00:37:30.229 "name": "pt2", 00:37:30.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:30.229 "is_configured": true, 00:37:30.229 "data_offset": 2048, 00:37:30.229 "data_size": 63488 00:37:30.229 }, 00:37:30.229 { 00:37:30.229 "name": "pt3", 00:37:30.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:30.229 "is_configured": true, 00:37:30.229 "data_offset": 2048, 00:37:30.229 "data_size": 63488 00:37:30.229 }, 00:37:30.229 { 00:37:30.230 "name": "pt4", 00:37:30.230 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:30.230 "is_configured": true, 00:37:30.230 "data_offset": 2048, 00:37:30.230 "data_size": 63488 00:37:30.230 } 00:37:30.230 ] 00:37:30.230 } 00:37:30.230 } 00:37:30.230 }' 00:37:30.230 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:30.488 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:30.488 pt2 00:37:30.488 pt3 00:37:30.488 pt4' 00:37:30.488 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:30.488 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:30.488 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:30.747 "name": "pt1", 00:37:30.747 "aliases": [ 00:37:30.747 "00000000-0000-0000-0000-000000000001" 00:37:30.747 ], 00:37:30.747 "product_name": "passthru", 00:37:30.747 "block_size": 512, 00:37:30.747 "num_blocks": 65536, 00:37:30.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:30.747 "assigned_rate_limits": { 00:37:30.747 "rw_ios_per_sec": 0, 00:37:30.747 "rw_mbytes_per_sec": 0, 00:37:30.747 "r_mbytes_per_sec": 
0, 00:37:30.747 "w_mbytes_per_sec": 0 00:37:30.747 }, 00:37:30.747 "claimed": true, 00:37:30.747 "claim_type": "exclusive_write", 00:37:30.747 "zoned": false, 00:37:30.747 "supported_io_types": { 00:37:30.747 "read": true, 00:37:30.747 "write": true, 00:37:30.747 "unmap": true, 00:37:30.747 "flush": true, 00:37:30.747 "reset": true, 00:37:30.747 "nvme_admin": false, 00:37:30.747 "nvme_io": false, 00:37:30.747 "nvme_io_md": false, 00:37:30.747 "write_zeroes": true, 00:37:30.747 "zcopy": true, 00:37:30.747 "get_zone_info": false, 00:37:30.747 "zone_management": false, 00:37:30.747 "zone_append": false, 00:37:30.747 "compare": false, 00:37:30.747 "compare_and_write": false, 00:37:30.747 "abort": true, 00:37:30.747 "seek_hole": false, 00:37:30.747 "seek_data": false, 00:37:30.747 "copy": true, 00:37:30.747 "nvme_iov_md": false 00:37:30.747 }, 00:37:30.747 "memory_domains": [ 00:37:30.747 { 00:37:30.747 "dma_device_id": "system", 00:37:30.747 "dma_device_type": 1 00:37:30.747 }, 00:37:30.747 { 00:37:30.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.747 "dma_device_type": 2 00:37:30.747 } 00:37:30.747 ], 00:37:30.747 "driver_specific": { 00:37:30.747 "passthru": { 00:37:30.747 "name": "pt1", 00:37:30.747 "base_bdev_name": "malloc1" 00:37:30.747 } 00:37:30.747 } 00:37:30.747 }' 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:30.747 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.006 09:04:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:31.006 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:31.265 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:31.265 "name": "pt2", 00:37:31.265 "aliases": [ 00:37:31.265 "00000000-0000-0000-0000-000000000002" 00:37:31.265 ], 00:37:31.265 "product_name": "passthru", 00:37:31.265 "block_size": 512, 00:37:31.265 "num_blocks": 65536, 00:37:31.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:31.265 "assigned_rate_limits": { 00:37:31.265 "rw_ios_per_sec": 0, 00:37:31.265 "rw_mbytes_per_sec": 0, 00:37:31.266 "r_mbytes_per_sec": 0, 00:37:31.266 "w_mbytes_per_sec": 0 00:37:31.266 }, 00:37:31.266 "claimed": true, 00:37:31.266 "claim_type": 
"exclusive_write", 00:37:31.266 "zoned": false, 00:37:31.266 "supported_io_types": { 00:37:31.266 "read": true, 00:37:31.266 "write": true, 00:37:31.266 "unmap": true, 00:37:31.266 "flush": true, 00:37:31.266 "reset": true, 00:37:31.266 "nvme_admin": false, 00:37:31.266 "nvme_io": false, 00:37:31.266 "nvme_io_md": false, 00:37:31.266 "write_zeroes": true, 00:37:31.266 "zcopy": true, 00:37:31.266 "get_zone_info": false, 00:37:31.266 "zone_management": false, 00:37:31.266 "zone_append": false, 00:37:31.266 "compare": false, 00:37:31.266 "compare_and_write": false, 00:37:31.266 "abort": true, 00:37:31.266 "seek_hole": false, 00:37:31.266 "seek_data": false, 00:37:31.266 "copy": true, 00:37:31.266 "nvme_iov_md": false 00:37:31.266 }, 00:37:31.266 "memory_domains": [ 00:37:31.266 { 00:37:31.266 "dma_device_id": "system", 00:37:31.266 "dma_device_type": 1 00:37:31.266 }, 00:37:31.266 { 00:37:31.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.266 "dma_device_type": 2 00:37:31.266 } 00:37:31.266 ], 00:37:31.266 "driver_specific": { 00:37:31.266 "passthru": { 00:37:31.266 "name": "pt2", 00:37:31.266 "base_bdev_name": "malloc2" 00:37:31.266 } 00:37:31.266 } 00:37:31.266 }' 00:37:31.266 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.266 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.525 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:37:31.783 09:04:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:32.042 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:32.042 "name": "pt3", 00:37:32.042 "aliases": [ 00:37:32.042 "00000000-0000-0000-0000-000000000003" 00:37:32.042 ], 00:37:32.042 "product_name": "passthru", 00:37:32.042 "block_size": 512, 00:37:32.042 "num_blocks": 65536, 00:37:32.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:32.042 "assigned_rate_limits": { 00:37:32.042 "rw_ios_per_sec": 0, 00:37:32.042 "rw_mbytes_per_sec": 0, 00:37:32.042 "r_mbytes_per_sec": 0, 00:37:32.042 "w_mbytes_per_sec": 0 00:37:32.042 }, 00:37:32.042 "claimed": true, 00:37:32.042 "claim_type": "exclusive_write", 00:37:32.042 "zoned": false, 00:37:32.042 "supported_io_types": { 00:37:32.042 "read": true, 
00:37:32.042 "write": true, 00:37:32.042 "unmap": true, 00:37:32.042 "flush": true, 00:37:32.042 "reset": true, 00:37:32.042 "nvme_admin": false, 00:37:32.042 "nvme_io": false, 00:37:32.042 "nvme_io_md": false, 00:37:32.042 "write_zeroes": true, 00:37:32.042 "zcopy": true, 00:37:32.042 "get_zone_info": false, 00:37:32.042 "zone_management": false, 00:37:32.042 "zone_append": false, 00:37:32.042 "compare": false, 00:37:32.042 "compare_and_write": false, 00:37:32.042 "abort": true, 00:37:32.042 "seek_hole": false, 00:37:32.042 "seek_data": false, 00:37:32.042 "copy": true, 00:37:32.042 "nvme_iov_md": false 00:37:32.042 }, 00:37:32.042 "memory_domains": [ 00:37:32.042 { 00:37:32.042 "dma_device_id": "system", 00:37:32.042 "dma_device_type": 1 00:37:32.042 }, 00:37:32.042 { 00:37:32.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.042 "dma_device_type": 2 00:37:32.042 } 00:37:32.042 ], 00:37:32.042 "driver_specific": { 00:37:32.042 "passthru": { 00:37:32.042 "name": "pt3", 00:37:32.042 "base_bdev_name": "malloc3" 00:37:32.042 } 00:37:32.042 } 00:37:32.042 }' 00:37:32.042 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.042 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.042 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:32.042 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:32.300 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:32.559 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:32.559 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:32.559 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:32.559 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:37:32.559 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:32.818 "name": "pt4", 00:37:32.818 "aliases": [ 00:37:32.818 "00000000-0000-0000-0000-000000000004" 00:37:32.818 ], 00:37:32.818 "product_name": "passthru", 00:37:32.818 "block_size": 512, 00:37:32.818 "num_blocks": 65536, 00:37:32.818 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:32.818 "assigned_rate_limits": { 00:37:32.818 "rw_ios_per_sec": 0, 00:37:32.818 "rw_mbytes_per_sec": 0, 00:37:32.818 "r_mbytes_per_sec": 0, 00:37:32.818 "w_mbytes_per_sec": 0 00:37:32.818 }, 00:37:32.818 "claimed": true, 00:37:32.818 "claim_type": "exclusive_write", 00:37:32.818 "zoned": false, 00:37:32.818 "supported_io_types": { 00:37:32.818 "read": true, 00:37:32.818 "write": true, 00:37:32.818 "unmap": true, 00:37:32.818 "flush": true, 00:37:32.818 "reset": true, 
00:37:32.818 "nvme_admin": false, 00:37:32.818 "nvme_io": false, 00:37:32.818 "nvme_io_md": false, 00:37:32.818 "write_zeroes": true, 00:37:32.818 "zcopy": true, 00:37:32.818 "get_zone_info": false, 00:37:32.818 "zone_management": false, 00:37:32.818 "zone_append": false, 00:37:32.818 "compare": false, 00:37:32.818 "compare_and_write": false, 00:37:32.818 "abort": true, 00:37:32.818 "seek_hole": false, 00:37:32.818 "seek_data": false, 00:37:32.818 "copy": true, 00:37:32.818 "nvme_iov_md": false 00:37:32.818 }, 00:37:32.818 "memory_domains": [ 00:37:32.818 { 00:37:32.818 "dma_device_id": "system", 00:37:32.818 "dma_device_type": 1 00:37:32.818 }, 00:37:32.818 { 00:37:32.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:32.818 "dma_device_type": 2 00:37:32.818 } 00:37:32.818 ], 00:37:32.818 "driver_specific": { 00:37:32.818 "passthru": { 00:37:32.818 "name": "pt4", 00:37:32.818 "base_bdev_name": "malloc4" 00:37:32.818 } 00:37:32.818 } 00:37:32.818 }' 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:32.818 09:04:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:33.077 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:33.335 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:37:33.335 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:33.335 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:33.335 [2024-07-12 09:04:08.518105] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 511cb381-d556-4949-96c1-170fcaec6e67 '!=' 511cb381-d556-4949-96c1-170fcaec6e67 ']' 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:33.595 [2024-07-12 09:04:08.762026] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.595 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.854 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:33.854 "name": "raid_bdev1", 00:37:33.854 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:33.854 "strip_size_kb": 64, 00:37:33.854 "state": "online", 00:37:33.854 "raid_level": "raid5f", 00:37:33.854 "superblock": true, 00:37:33.854 "num_base_bdevs": 4, 00:37:33.854 "num_base_bdevs_discovered": 3, 00:37:33.854 "num_base_bdevs_operational": 3, 00:37:33.854 "base_bdevs_list": [ 00:37:33.854 { 00:37:33.854 "name": null, 00:37:33.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.854 "is_configured": false, 00:37:33.854 "data_offset": 2048, 00:37:33.854 "data_size": 63488 00:37:33.854 }, 00:37:33.854 { 00:37:33.854 "name": "pt2", 00:37:33.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:33.854 "is_configured": true, 00:37:33.854 "data_offset": 2048, 00:37:33.854 "data_size": 63488 00:37:33.854 }, 00:37:33.854 { 00:37:33.854 "name": "pt3", 00:37:33.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:33.854 "is_configured": true, 00:37:33.854 "data_offset": 2048, 00:37:33.854 "data_size": 63488 00:37:33.854 }, 00:37:33.854 { 00:37:33.854 "name": "pt4", 00:37:33.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:33.854 "is_configured": true, 00:37:33.854 "data_offset": 2048, 00:37:33.854 "data_size": 63488 00:37:33.854 } 00:37:33.854 ] 00:37:33.854 }' 00:37:33.854 09:04:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:33.854 09:04:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.424 09:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:34.683 [2024-07-12 09:04:09.838328] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:34.683 [2024-07-12 09:04:09.838368] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:34.683 [2024-07-12 09:04:09.838449] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:34.683 [2024-07-12 09:04:09.838526] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:34.683 
[2024-07-12 09:04:09.838538] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:37:34.683 09:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.683 09:04:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:34.942 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:34.942 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:34.942 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:34.942 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:34.942 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:35.200 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:35.200 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:35.200 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:37:35.459 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:35.459 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:35.459 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:37:35.718 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:35.718 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:35.718 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:35.718 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:35.718 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:35.976 [2024-07-12 09:04:10.950466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:35.976 [2024-07-12 09:04:10.950555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:35.976 [2024-07-12 09:04:10.950589] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:35.976 [2024-07-12 09:04:10.950676] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:35.976 [2024-07-12 09:04:10.953202] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:35.976 [2024-07-12 09:04:10.953248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:35.976 [2024-07-12 09:04:10.953419] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:35.976 [2024-07-12 09:04:10.953483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:35.976 pt2 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 
00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:35.976 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.977 09:04:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.235 09:04:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:36.235 "name": "raid_bdev1", 00:37:36.235 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:36.235 "strip_size_kb": 64, 00:37:36.235 "state": "configuring", 00:37:36.235 "raid_level": "raid5f", 00:37:36.235 "superblock": true, 00:37:36.235 "num_base_bdevs": 4, 00:37:36.235 "num_base_bdevs_discovered": 1, 00:37:36.235 "num_base_bdevs_operational": 3, 00:37:36.235 "base_bdevs_list": [ 00:37:36.235 { 00:37:36.235 "name": null, 00:37:36.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.235 "is_configured": false, 00:37:36.235 "data_offset": 2048, 00:37:36.235 "data_size": 63488 00:37:36.235 }, 00:37:36.235 { 00:37:36.235 "name": "pt2", 00:37:36.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:36.235 "is_configured": true, 00:37:36.235 "data_offset": 2048, 00:37:36.235 "data_size": 63488 00:37:36.235 }, 00:37:36.235 { 00:37:36.235 "name": null, 00:37:36.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:36.235 "is_configured": false, 00:37:36.235 "data_offset": 2048, 00:37:36.235 "data_size": 63488 00:37:36.235 }, 00:37:36.235 { 00:37:36.235 "name": null, 00:37:36.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:36.235 "is_configured": false, 00:37:36.235 "data_offset": 2048, 00:37:36.235 "data_size": 63488 00:37:36.235 } 00:37:36.235 ] 00:37:36.235 }' 00:37:36.235 09:04:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:36.235 09:04:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.801 09:04:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:37:36.801 09:04:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:36.801 09:04:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:37.060 [2024-07-12 09:04:12.154658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:37.060 [2024-07-12 09:04:12.154767] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:37.060 [2024-07-12 09:04:12.154811] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:37:37.060 [2024-07-12 09:04:12.154853] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:37.060 [2024-07-12 09:04:12.155371] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:37.060 [2024-07-12 09:04:12.155402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:37.060 [2024-07-12 09:04:12.155503] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:37.060 [2024-07-12 09:04:12.155533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:37.060 pt3 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:37.060 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.320 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:37.320 "name": "raid_bdev1", 00:37:37.320 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:37.320 "strip_size_kb": 64, 00:37:37.320 "state": "configuring", 00:37:37.320 "raid_level": "raid5f", 00:37:37.320 "superblock": true, 00:37:37.320 "num_base_bdevs": 4, 00:37:37.320 "num_base_bdevs_discovered": 2, 00:37:37.320 "num_base_bdevs_operational": 3, 00:37:37.320 "base_bdevs_list": [ 00:37:37.320 { 00:37:37.320 "name": null, 00:37:37.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:37.320 "is_configured": false, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 }, 00:37:37.320 { 00:37:37.320 "name": "pt2", 00:37:37.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:37.320 "is_configured": true, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 }, 00:37:37.320 { 00:37:37.320 "name": "pt3", 00:37:37.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:37.320 "is_configured": true, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 }, 00:37:37.320 { 00:37:37.320 "name": null, 00:37:37.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:37.320 "is_configured": 
false, 00:37:37.320 "data_offset": 2048, 00:37:37.320 "data_size": 63488 00:37:37.320 } 00:37:37.320 ] 00:37:37.320 }' 00:37:37.320 09:04:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:37.320 09:04:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:38.274 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:37:38.274 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:38.274 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:37:38.274 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:38.274 [2024-07-12 09:04:13.406880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:38.274 [2024-07-12 09:04:13.406970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:38.274 [2024-07-12 09:04:13.407010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:37:38.274 [2024-07-12 09:04:13.407031] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:38.274 [2024-07-12 09:04:13.407492] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:38.274 [2024-07-12 09:04:13.407535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:38.274 [2024-07-12 09:04:13.407667] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:38.274 [2024-07-12 09:04:13.407702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:38.274 [2024-07-12 09:04:13.407836] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:37:38.274 [2024-07-12 09:04:13.407860] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:38.274 [2024-07-12 09:04:13.407957] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:37:38.274 [2024-07-12 09:04:13.413195] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:37:38.275 [2024-07-12 09:04:13.413221] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:37:38.275 [2024-07-12 09:04:13.413484] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:38.275 pt4 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:38.275 09:04:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.275 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.532 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:38.532 "name": "raid_bdev1", 00:37:38.532 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:38.532 "strip_size_kb": 64, 00:37:38.532 "state": "online", 00:37:38.532 "raid_level": "raid5f", 00:37:38.532 "superblock": true, 00:37:38.532 "num_base_bdevs": 4, 00:37:38.532 "num_base_bdevs_discovered": 3, 00:37:38.532 "num_base_bdevs_operational": 3, 00:37:38.532 "base_bdevs_list": [ 00:37:38.532 { 00:37:38.532 "name": null, 00:37:38.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.532 "is_configured": false, 00:37:38.532 "data_offset": 2048, 00:37:38.532 "data_size": 63488 00:37:38.532 }, 00:37:38.532 { 00:37:38.532 "name": "pt2", 00:37:38.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:38.532 "is_configured": true, 00:37:38.532 "data_offset": 2048, 00:37:38.532 "data_size": 63488 00:37:38.532 }, 00:37:38.532 { 00:37:38.532 "name": "pt3", 00:37:38.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:38.532 "is_configured": true, 00:37:38.532 "data_offset": 2048, 00:37:38.532 "data_size": 63488 00:37:38.532 }, 00:37:38.532 { 00:37:38.532 "name": "pt4", 00:37:38.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:38.532 "is_configured": true, 00:37:38.532 "data_offset": 2048, 00:37:38.532 "data_size": 63488 00:37:38.532 } 00:37:38.532 ] 00:37:38.532 }' 00:37:38.532 09:04:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:38.532 09:04:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.465 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:39.465 [2024-07-12 09:04:14.527689] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:39.465 [2024-07-12 09:04:14.527720] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:39.465 [2024-07-12 09:04:14.527784] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:39.465 [2024-07-12 09:04:14.527851] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:39.466 [2024-07-12 09:04:14.527861] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:37:39.466 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.466 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:39.724 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:39.724 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:39.724 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:37:39.724 
09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:37:39.724 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:37:39.724 09:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:39.982 [2024-07-12 09:04:15.148599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:39.982 [2024-07-12 09:04:15.148673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:39.982 [2024-07-12 09:04:15.148711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:37:39.982 [2024-07-12 09:04:15.148776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:39.982 [2024-07-12 09:04:15.150918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:39.982 [2024-07-12 09:04:15.150970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:39.982 [2024-07-12 09:04:15.151078] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:39.982 [2024-07-12 09:04:15.151136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:39.982 [2024-07-12 09:04:15.151283] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:39.982 [2024-07-12 09:04:15.151304] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:39.982 [2024-07-12 09:04:15.151330] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:37:39.982 [2024-07-12 09:04:15.151410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:39.982 [2024-07-12 09:04:15.151547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:39.982 pt1 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.982 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.239 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:40.239 "name": "raid_bdev1", 00:37:40.239 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:40.239 "strip_size_kb": 64, 00:37:40.239 "state": "configuring", 00:37:40.239 "raid_level": "raid5f", 00:37:40.239 "superblock": true, 00:37:40.239 "num_base_bdevs": 4, 00:37:40.239 "num_base_bdevs_discovered": 2, 00:37:40.239 "num_base_bdevs_operational": 3, 00:37:40.239 "base_bdevs_list": [ 00:37:40.239 { 00:37:40.239 "name": null, 00:37:40.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.239 "is_configured": false, 00:37:40.239 "data_offset": 2048, 00:37:40.239 "data_size": 63488 00:37:40.239 }, 00:37:40.239 { 00:37:40.239 "name": "pt2", 00:37:40.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:40.239 "is_configured": true, 00:37:40.239 "data_offset": 2048, 00:37:40.239 "data_size": 63488 00:37:40.239 }, 00:37:40.239 { 00:37:40.239 "name": "pt3", 00:37:40.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:40.239 "is_configured": true, 00:37:40.239 "data_offset": 2048, 00:37:40.239 "data_size": 63488 00:37:40.239 }, 00:37:40.239 { 00:37:40.239 "name": null, 00:37:40.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:40.239 "is_configured": false, 00:37:40.239 "data_offset": 2048, 00:37:40.239 "data_size": 63488 00:37:40.239 } 00:37:40.239 ] 00:37:40.239 }' 00:37:40.239 09:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:40.239 09:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.174 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:37:41.174 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:41.174 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:37:41.174 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:41.432 [2024-07-12 09:04:16.504862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:41.432 [2024-07-12 09:04:16.504921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.432 [2024-07-12 09:04:16.504953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:37:41.432 [2024-07-12 09:04:16.504999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.432 [2024-07-12 09:04:16.505417] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.432 [2024-07-12 09:04:16.505453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:41.432 [2024-07-12 09:04:16.505535] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:41.432 [2024-07-12 09:04:16.505560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:41.432 [2024-07-12 09:04:16.505698] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 
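The two RPCs driving this part of the scenario are visible at @534 and @539: the pt4 passthru is deleted and pt1 is recreated on top of malloc1, whose raid superblock is then examined and claimed, as the *DEBUG* lines above show. For reference, the bare RPC invocations, with names and UUID copied verbatim from the trace:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# bdev_raid.sh@534: remove the pt4 passthru, leaving 3 of the 4 original members.
"$rpc" -s "$sock" bdev_passthru_delete pt4
# bdev_raid.sh@539: recreate pt1 on malloc1; its raid superblock is examined on
# registration and the bdev is claimed by the raid module, as logged above.
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001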
00:37:41.432 [2024-07-12 09:04:16.505711] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:41.432 [2024-07-12 09:04:16.505803] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:37:41.432 [2024-07-12 09:04:16.511172] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:37:41.432 [2024-07-12 09:04:16.511195] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:37:41.432 [2024-07-12 09:04:16.511412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:41.432 pt4 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.432 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.690 09:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:41.690 "name": "raid_bdev1", 00:37:41.690 "uuid": "511cb381-d556-4949-96c1-170fcaec6e67", 00:37:41.690 "strip_size_kb": 64, 00:37:41.690 "state": "online", 00:37:41.690 "raid_level": "raid5f", 00:37:41.690 "superblock": true, 00:37:41.690 "num_base_bdevs": 4, 00:37:41.690 "num_base_bdevs_discovered": 3, 00:37:41.690 "num_base_bdevs_operational": 3, 00:37:41.690 "base_bdevs_list": [ 00:37:41.690 { 00:37:41.690 "name": null, 00:37:41.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.690 "is_configured": false, 00:37:41.690 "data_offset": 2048, 00:37:41.690 "data_size": 63488 00:37:41.690 }, 00:37:41.690 { 00:37:41.690 "name": "pt2", 00:37:41.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:41.690 "is_configured": true, 00:37:41.690 "data_offset": 2048, 00:37:41.690 "data_size": 63488 00:37:41.690 }, 00:37:41.690 { 00:37:41.690 "name": "pt3", 00:37:41.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:41.690 "is_configured": true, 00:37:41.690 "data_offset": 2048, 00:37:41.690 "data_size": 63488 00:37:41.690 }, 00:37:41.690 { 00:37:41.690 "name": "pt4", 00:37:41.690 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:41.690 "is_configured": true, 00:37:41.690 "data_offset": 2048, 00:37:41.690 "data_size": 63488 00:37:41.690 } 00:37:41.690 ] 00:37:41.690 }' 00:37:41.690 09:04:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:41.690 09:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.624 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:42.624 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:42.624 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:42.624 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:42.624 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:42.883 [2024-07-12 09:04:17.938657] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 511cb381-d556-4949-96c1-170fcaec6e67 '!=' 511cb381-d556-4949-96c1-170fcaec6e67 ']' 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 159278 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 159278 ']' 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 159278 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159278 00:37:42.883 killing process with pid 159278 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159278' 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 159278 00:37:42.883 09:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 159278 00:37:42.883 [2024-07-12 09:04:17.967529] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:42.883 [2024-07-12 09:04:17.967602] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:42.883 [2024-07-12 09:04:17.967709] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:42.883 [2024-07-12 09:04:17.967730] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:37:43.141 [2024-07-12 09:04:18.221416] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:44.125 ************************************ 00:37:44.125 END TEST raid5f_superblock_test 00:37:44.125 ************************************ 00:37:44.125 09:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:37:44.125 00:37:44.125 real 0m26.853s 00:37:44.125 user 0m50.709s 00:37:44.125 sys 0m2.789s 00:37:44.125 09:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.125 
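The killprocess block that closes the superblock test is the usual autotest cleanup: confirm pid 159278 is still alive, look at what it is, then kill it and reap it so the RPC socket is free before the next test starts. A rough standalone sketch of the same steps (it assumes, as the test script can, that the target was launched from the current shell so wait applies):
pid=159278                              # pid printed in the trace above
kill -0 "$pid"                          # still running?
ps --no-headers -o comm= "$pid"         # reports reactor_0 in the run above
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"                             # reap it before the next test begins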
09:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.125 09:04:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:44.125 09:04:19 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:37:44.125 09:04:19 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:37:44.125 09:04:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:44.125 09:04:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:44.125 09:04:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:44.125 ************************************ 00:37:44.125 START TEST raid5f_rebuild_test 00:37:44.125 ************************************ 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=160159 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 160159 /var/tmp/spdk-raid.sock 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 160159 ']' 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:44.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:44.125 09:04:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.125 [2024-07-12 09:04:19.267070] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:37:44.125 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:44.125 Zero copy mechanism will not be used. 
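The bdevperf command that produced raid_pid=160159 sits on one long line at @595 above; restating it here makes the start-up notices easier to read. The per-option notes are interpretation rather than trace output; the 3 MiB I/O size from -o 3M is what triggers the zero-copy message, since 3145728 bytes is well above the 65536-byte threshold it reports.
# -t 60: run for 60 s; -w randrw -M 50: mixed 50/50 random read/write;
# -o 3M -q 2: 3 MiB I/Os at queue depth 2; -L bdev_raid: raid debug logging;
# the remaining flags (-r, -T, -U, -z) are copied verbatim from bdev_raid.sh@595.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid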
00:37:44.126 [2024-07-12 09:04:19.267271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160159 ] 00:37:44.394 [2024-07-12 09:04:19.438228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.653 [2024-07-12 09:04:19.693103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.912 [2024-07-12 09:04:19.882449] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:45.171 09:04:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:45.171 09:04:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:37:45.171 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:45.171 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:45.430 BaseBdev1_malloc 00:37:45.430 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:45.690 [2024-07-12 09:04:20.668454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:45.690 [2024-07-12 09:04:20.668559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:45.690 [2024-07-12 09:04:20.668600] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:45.690 [2024-07-12 09:04:20.668621] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:45.690 [2024-07-12 09:04:20.670842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:45.690 [2024-07-12 09:04:20.670887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:45.690 BaseBdev1 00:37:45.690 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:45.690 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:45.949 BaseBdev2_malloc 00:37:45.949 09:04:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:46.207 [2024-07-12 09:04:21.162131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:46.207 [2024-07-12 09:04:21.162224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:46.207 [2024-07-12 09:04:21.162266] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:46.207 [2024-07-12 09:04:21.162287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:46.207 [2024-07-12 09:04:21.164474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:46.207 [2024-07-12 09:04:21.164523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:46.207 BaseBdev2 00:37:46.207 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev 
in "${base_bdevs[@]}" 00:37:46.207 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:46.466 BaseBdev3_malloc 00:37:46.466 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:37:46.466 [2024-07-12 09:04:21.643760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:46.466 [2024-07-12 09:04:21.643878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:46.466 [2024-07-12 09:04:21.643919] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:37:46.466 [2024-07-12 09:04:21.643945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:46.466 [2024-07-12 09:04:21.645961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:46.466 [2024-07-12 09:04:21.646013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:46.466 BaseBdev3 00:37:46.466 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:46.466 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:37:47.032 BaseBdev4_malloc 00:37:47.032 09:04:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:37:47.032 [2024-07-12 09:04:22.129701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:37:47.032 [2024-07-12 09:04:22.129828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:47.032 [2024-07-12 09:04:22.129869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:47.032 [2024-07-12 09:04:22.129896] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:47.032 [2024-07-12 09:04:22.131841] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:47.032 [2024-07-12 09:04:22.131906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:47.032 BaseBdev4 00:37:47.032 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:37:47.290 spare_malloc 00:37:47.290 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:47.549 spare_delay 00:37:47.549 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:47.809 [2024-07-12 09:04:22.790643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:47.809 [2024-07-12 09:04:22.790734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:47.809 [2024-07-12 09:04:22.790766] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000a580 00:37:47.809 [2024-07-12 09:04:22.790795] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:47.809 [2024-07-12 09:04:22.792986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:47.809 [2024-07-12 09:04:22.793038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:47.809 spare 00:37:47.809 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:37:47.809 [2024-07-12 09:04:22.986717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:47.809 [2024-07-12 09:04:22.988252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:47.809 [2024-07-12 09:04:22.988348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:47.809 [2024-07-12 09:04:22.988404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:47.809 [2024-07-12 09:04:22.988506] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:37:47.809 [2024-07-12 09:04:22.988519] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:47.809 [2024-07-12 09:04:22.988640] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:47.809 [2024-07-12 09:04:22.993824] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:37:47.809 [2024-07-12 09:04:22.993848] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:37:47.809 [2024-07-12 09:04:22.994013] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:48.066 09:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:48.066 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.066 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.066 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:48.066 "name": "raid_bdev1", 00:37:48.066 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:48.066 "strip_size_kb": 64, 
00:37:48.066 "state": "online", 00:37:48.066 "raid_level": "raid5f", 00:37:48.066 "superblock": false, 00:37:48.066 "num_base_bdevs": 4, 00:37:48.066 "num_base_bdevs_discovered": 4, 00:37:48.066 "num_base_bdevs_operational": 4, 00:37:48.066 "base_bdevs_list": [ 00:37:48.066 { 00:37:48.066 "name": "BaseBdev1", 00:37:48.066 "uuid": "ddee7140-b6f7-535d-a284-9fde97ec9fdb", 00:37:48.066 "is_configured": true, 00:37:48.066 "data_offset": 0, 00:37:48.066 "data_size": 65536 00:37:48.066 }, 00:37:48.066 { 00:37:48.066 "name": "BaseBdev2", 00:37:48.066 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:48.066 "is_configured": true, 00:37:48.066 "data_offset": 0, 00:37:48.066 "data_size": 65536 00:37:48.066 }, 00:37:48.066 { 00:37:48.067 "name": "BaseBdev3", 00:37:48.067 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:48.067 "is_configured": true, 00:37:48.067 "data_offset": 0, 00:37:48.067 "data_size": 65536 00:37:48.067 }, 00:37:48.067 { 00:37:48.067 "name": "BaseBdev4", 00:37:48.067 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:48.067 "is_configured": true, 00:37:48.067 "data_offset": 0, 00:37:48.067 "data_size": 65536 00:37:48.067 } 00:37:48.067 ] 00:37:48.067 }' 00:37:48.067 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:48.067 09:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:49.000 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:49.000 09:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:49.000 [2024-07-12 09:04:24.137110] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:49.000 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:37:49.000 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.000 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:49.258 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:49.258 
09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:49.517 [2024-07-12 09:04:24.581082] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:49.517 /dev/nbd0 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:49.517 1+0 records in 00:37:49.517 1+0 records out 00:37:49.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349158 s, 11.7 MB/s 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:37:49.517 09:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:37:50.083 512+0 records in 00:37:50.083 512+0 records out 00:37:50.083 100663296 bytes (101 MB, 96 MiB) copied, 0.528369 s, 191 MB/s 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@51 -- # local i 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:50.083 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:50.340 [2024-07-12 09:04:25.392813] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:50.340 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:50.599 [2024-07-12 09:04:25.672976] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.599 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.857 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:50.857 "name": "raid_bdev1", 00:37:50.857 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:50.857 "strip_size_kb": 64, 00:37:50.857 
"state": "online", 00:37:50.857 "raid_level": "raid5f", 00:37:50.857 "superblock": false, 00:37:50.857 "num_base_bdevs": 4, 00:37:50.857 "num_base_bdevs_discovered": 3, 00:37:50.857 "num_base_bdevs_operational": 3, 00:37:50.857 "base_bdevs_list": [ 00:37:50.857 { 00:37:50.857 "name": null, 00:37:50.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:50.857 "is_configured": false, 00:37:50.857 "data_offset": 0, 00:37:50.857 "data_size": 65536 00:37:50.857 }, 00:37:50.857 { 00:37:50.857 "name": "BaseBdev2", 00:37:50.857 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:50.857 "is_configured": true, 00:37:50.857 "data_offset": 0, 00:37:50.857 "data_size": 65536 00:37:50.857 }, 00:37:50.857 { 00:37:50.857 "name": "BaseBdev3", 00:37:50.857 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:50.857 "is_configured": true, 00:37:50.857 "data_offset": 0, 00:37:50.857 "data_size": 65536 00:37:50.857 }, 00:37:50.857 { 00:37:50.857 "name": "BaseBdev4", 00:37:50.857 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:50.857 "is_configured": true, 00:37:50.857 "data_offset": 0, 00:37:50.857 "data_size": 65536 00:37:50.857 } 00:37:50.857 ] 00:37:50.857 }' 00:37:50.857 09:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:50.857 09:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.423 09:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:51.680 [2024-07-12 09:04:26.757169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:51.680 [2024-07-12 09:04:26.767370] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d7d0 00:37:51.680 [2024-07-12 09:04:26.774008] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:51.680 09:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.614 09:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.872 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:52.872 "name": "raid_bdev1", 00:37:52.872 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:52.872 "strip_size_kb": 64, 00:37:52.872 "state": "online", 00:37:52.872 "raid_level": "raid5f", 00:37:52.872 "superblock": false, 00:37:52.872 "num_base_bdevs": 4, 00:37:52.872 "num_base_bdevs_discovered": 4, 00:37:52.872 "num_base_bdevs_operational": 4, 00:37:52.872 "process": { 00:37:52.872 "type": "rebuild", 00:37:52.872 "target": "spare", 00:37:52.872 "progress": { 00:37:52.872 "blocks": 23040, 00:37:52.872 "percent": 11 00:37:52.872 } 
00:37:52.872 }, 00:37:52.872 "base_bdevs_list": [ 00:37:52.872 { 00:37:52.872 "name": "spare", 00:37:52.872 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:37:52.872 "is_configured": true, 00:37:52.872 "data_offset": 0, 00:37:52.872 "data_size": 65536 00:37:52.872 }, 00:37:52.872 { 00:37:52.872 "name": "BaseBdev2", 00:37:52.872 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:52.872 "is_configured": true, 00:37:52.872 "data_offset": 0, 00:37:52.872 "data_size": 65536 00:37:52.872 }, 00:37:52.872 { 00:37:52.872 "name": "BaseBdev3", 00:37:52.872 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:52.872 "is_configured": true, 00:37:52.872 "data_offset": 0, 00:37:52.872 "data_size": 65536 00:37:52.872 }, 00:37:52.872 { 00:37:52.872 "name": "BaseBdev4", 00:37:52.872 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:52.872 "is_configured": true, 00:37:52.872 "data_offset": 0, 00:37:52.872 "data_size": 65536 00:37:52.872 } 00:37:52.872 ] 00:37:52.872 }' 00:37:52.872 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:53.131 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:53.131 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:53.131 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:53.131 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:53.131 [2024-07-12 09:04:28.307554] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:53.389 [2024-07-12 09:04:28.386298] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:53.389 [2024-07-12 09:04:28.386377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:53.389 [2024-07-12 09:04:28.386397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:53.389 [2024-07-12 09:04:28.386405] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.389 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.646 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:53.646 "name": "raid_bdev1", 00:37:53.646 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:53.646 "strip_size_kb": 64, 00:37:53.646 "state": "online", 00:37:53.646 "raid_level": "raid5f", 00:37:53.646 "superblock": false, 00:37:53.646 "num_base_bdevs": 4, 00:37:53.646 "num_base_bdevs_discovered": 3, 00:37:53.646 "num_base_bdevs_operational": 3, 00:37:53.646 "base_bdevs_list": [ 00:37:53.646 { 00:37:53.646 "name": null, 00:37:53.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.646 "is_configured": false, 00:37:53.646 "data_offset": 0, 00:37:53.646 "data_size": 65536 00:37:53.646 }, 00:37:53.646 { 00:37:53.646 "name": "BaseBdev2", 00:37:53.646 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:53.646 "is_configured": true, 00:37:53.646 "data_offset": 0, 00:37:53.646 "data_size": 65536 00:37:53.646 }, 00:37:53.646 { 00:37:53.646 "name": "BaseBdev3", 00:37:53.646 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:53.646 "is_configured": true, 00:37:53.646 "data_offset": 0, 00:37:53.646 "data_size": 65536 00:37:53.646 }, 00:37:53.646 { 00:37:53.646 "name": "BaseBdev4", 00:37:53.646 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:53.646 "is_configured": true, 00:37:53.646 "data_offset": 0, 00:37:53.646 "data_size": 65536 00:37:53.646 } 00:37:53.646 ] 00:37:53.646 }' 00:37:53.646 09:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:53.646 09:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:54.212 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.469 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:54.469 "name": "raid_bdev1", 00:37:54.469 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:54.469 "strip_size_kb": 64, 00:37:54.469 "state": "online", 00:37:54.469 "raid_level": "raid5f", 00:37:54.469 "superblock": false, 00:37:54.469 "num_base_bdevs": 4, 00:37:54.469 "num_base_bdevs_discovered": 3, 00:37:54.469 "num_base_bdevs_operational": 3, 00:37:54.469 "base_bdevs_list": [ 00:37:54.469 { 00:37:54.469 "name": null, 00:37:54.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:54.469 "is_configured": false, 00:37:54.469 "data_offset": 0, 00:37:54.469 "data_size": 65536 00:37:54.469 }, 00:37:54.469 { 00:37:54.469 "name": "BaseBdev2", 00:37:54.469 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:54.469 "is_configured": true, 00:37:54.469 "data_offset": 0, 00:37:54.469 "data_size": 65536 00:37:54.469 }, 00:37:54.469 { 00:37:54.469 "name": "BaseBdev3", 
00:37:54.469 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:54.469 "is_configured": true, 00:37:54.469 "data_offset": 0, 00:37:54.469 "data_size": 65536 00:37:54.469 }, 00:37:54.469 { 00:37:54.469 "name": "BaseBdev4", 00:37:54.469 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:54.469 "is_configured": true, 00:37:54.469 "data_offset": 0, 00:37:54.469 "data_size": 65536 00:37:54.469 } 00:37:54.469 ] 00:37:54.469 }' 00:37:54.469 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:54.726 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:54.726 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:54.726 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:54.726 09:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:54.982 [2024-07-12 09:04:29.987593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:54.982 [2024-07-12 09:04:29.997431] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d970 00:37:54.982 [2024-07-12 09:04:30.004164] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:54.982 09:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:55.915 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.172 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:56.172 "name": "raid_bdev1", 00:37:56.172 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:56.172 "strip_size_kb": 64, 00:37:56.172 "state": "online", 00:37:56.172 "raid_level": "raid5f", 00:37:56.172 "superblock": false, 00:37:56.172 "num_base_bdevs": 4, 00:37:56.172 "num_base_bdevs_discovered": 4, 00:37:56.172 "num_base_bdevs_operational": 4, 00:37:56.172 "process": { 00:37:56.172 "type": "rebuild", 00:37:56.172 "target": "spare", 00:37:56.172 "progress": { 00:37:56.172 "blocks": 23040, 00:37:56.172 "percent": 11 00:37:56.172 } 00:37:56.172 }, 00:37:56.172 "base_bdevs_list": [ 00:37:56.172 { 00:37:56.172 "name": "spare", 00:37:56.172 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:37:56.172 "is_configured": true, 00:37:56.172 "data_offset": 0, 00:37:56.172 "data_size": 65536 00:37:56.172 }, 00:37:56.172 { 00:37:56.172 "name": "BaseBdev2", 00:37:56.172 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:56.172 "is_configured": true, 00:37:56.172 "data_offset": 0, 00:37:56.172 "data_size": 65536 00:37:56.172 }, 
00:37:56.172 { 00:37:56.172 "name": "BaseBdev3", 00:37:56.172 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:56.172 "is_configured": true, 00:37:56.172 "data_offset": 0, 00:37:56.172 "data_size": 65536 00:37:56.172 }, 00:37:56.172 { 00:37:56.172 "name": "BaseBdev4", 00:37:56.172 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:56.172 "is_configured": true, 00:37:56.173 "data_offset": 0, 00:37:56.173 "data_size": 65536 00:37:56.173 } 00:37:56.173 ] 00:37:56.173 }' 00:37:56.173 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:56.173 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:56.173 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1370 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:56.431 "name": "raid_bdev1", 00:37:56.431 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:56.431 "strip_size_kb": 64, 00:37:56.431 "state": "online", 00:37:56.431 "raid_level": "raid5f", 00:37:56.431 "superblock": false, 00:37:56.431 "num_base_bdevs": 4, 00:37:56.431 "num_base_bdevs_discovered": 4, 00:37:56.431 "num_base_bdevs_operational": 4, 00:37:56.431 "process": { 00:37:56.431 "type": "rebuild", 00:37:56.431 "target": "spare", 00:37:56.431 "progress": { 00:37:56.431 "blocks": 28800, 00:37:56.431 "percent": 14 00:37:56.431 } 00:37:56.431 }, 00:37:56.431 "base_bdevs_list": [ 00:37:56.431 { 00:37:56.431 "name": "spare", 00:37:56.431 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:37:56.431 "is_configured": true, 00:37:56.431 "data_offset": 0, 00:37:56.431 "data_size": 65536 00:37:56.431 }, 00:37:56.431 { 00:37:56.431 "name": "BaseBdev2", 00:37:56.431 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:56.431 "is_configured": true, 00:37:56.431 "data_offset": 0, 00:37:56.431 "data_size": 65536 00:37:56.431 }, 00:37:56.431 { 00:37:56.431 "name": "BaseBdev3", 00:37:56.431 "uuid": 
"7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:56.431 "is_configured": true, 00:37:56.431 "data_offset": 0, 00:37:56.431 "data_size": 65536 00:37:56.431 }, 00:37:56.431 { 00:37:56.431 "name": "BaseBdev4", 00:37:56.431 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:56.431 "is_configured": true, 00:37:56.431 "data_offset": 0, 00:37:56.431 "data_size": 65536 00:37:56.431 } 00:37:56.431 ] 00:37:56.431 }' 00:37:56.431 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:56.688 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:56.688 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:56.688 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:56.688 09:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.620 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:57.879 "name": "raid_bdev1", 00:37:57.879 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:57.879 "strip_size_kb": 64, 00:37:57.879 "state": "online", 00:37:57.879 "raid_level": "raid5f", 00:37:57.879 "superblock": false, 00:37:57.879 "num_base_bdevs": 4, 00:37:57.879 "num_base_bdevs_discovered": 4, 00:37:57.879 "num_base_bdevs_operational": 4, 00:37:57.879 "process": { 00:37:57.879 "type": "rebuild", 00:37:57.879 "target": "spare", 00:37:57.879 "progress": { 00:37:57.879 "blocks": 53760, 00:37:57.879 "percent": 27 00:37:57.879 } 00:37:57.879 }, 00:37:57.879 "base_bdevs_list": [ 00:37:57.879 { 00:37:57.879 "name": "spare", 00:37:57.879 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:37:57.879 "is_configured": true, 00:37:57.879 "data_offset": 0, 00:37:57.879 "data_size": 65536 00:37:57.879 }, 00:37:57.879 { 00:37:57.879 "name": "BaseBdev2", 00:37:57.879 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:57.879 "is_configured": true, 00:37:57.879 "data_offset": 0, 00:37:57.879 "data_size": 65536 00:37:57.879 }, 00:37:57.879 { 00:37:57.879 "name": "BaseBdev3", 00:37:57.879 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:57.879 "is_configured": true, 00:37:57.879 "data_offset": 0, 00:37:57.879 "data_size": 65536 00:37:57.879 }, 00:37:57.879 { 00:37:57.879 "name": "BaseBdev4", 00:37:57.879 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:57.879 "is_configured": true, 00:37:57.879 "data_offset": 0, 00:37:57.879 "data_size": 65536 00:37:57.879 } 00:37:57.879 ] 00:37:57.879 }' 
00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:57.879 09:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.811 09:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.069 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:59.069 "name": "raid_bdev1", 00:37:59.069 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:37:59.069 "strip_size_kb": 64, 00:37:59.069 "state": "online", 00:37:59.069 "raid_level": "raid5f", 00:37:59.069 "superblock": false, 00:37:59.069 "num_base_bdevs": 4, 00:37:59.069 "num_base_bdevs_discovered": 4, 00:37:59.069 "num_base_bdevs_operational": 4, 00:37:59.069 "process": { 00:37:59.069 "type": "rebuild", 00:37:59.069 "target": "spare", 00:37:59.069 "progress": { 00:37:59.069 "blocks": 80640, 00:37:59.070 "percent": 41 00:37:59.070 } 00:37:59.070 }, 00:37:59.070 "base_bdevs_list": [ 00:37:59.070 { 00:37:59.070 "name": "spare", 00:37:59.070 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:37:59.070 "is_configured": true, 00:37:59.070 "data_offset": 0, 00:37:59.070 "data_size": 65536 00:37:59.070 }, 00:37:59.070 { 00:37:59.070 "name": "BaseBdev2", 00:37:59.070 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:37:59.070 "is_configured": true, 00:37:59.070 "data_offset": 0, 00:37:59.070 "data_size": 65536 00:37:59.070 }, 00:37:59.070 { 00:37:59.070 "name": "BaseBdev3", 00:37:59.070 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:37:59.070 "is_configured": true, 00:37:59.070 "data_offset": 0, 00:37:59.070 "data_size": 65536 00:37:59.070 }, 00:37:59.070 { 00:37:59.070 "name": "BaseBdev4", 00:37:59.070 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:37:59.070 "is_configured": true, 00:37:59.070 "data_offset": 0, 00:37:59.070 "data_size": 65536 00:37:59.070 } 00:37:59.070 ] 00:37:59.070 }' 00:37:59.070 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:59.328 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:59.328 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:59.328 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 
-- # [[ spare == \s\p\a\r\e ]] 00:37:59.328 09:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.262 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.520 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:00.520 "name": "raid_bdev1", 00:38:00.520 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:00.520 "strip_size_kb": 64, 00:38:00.520 "state": "online", 00:38:00.520 "raid_level": "raid5f", 00:38:00.520 "superblock": false, 00:38:00.520 "num_base_bdevs": 4, 00:38:00.520 "num_base_bdevs_discovered": 4, 00:38:00.520 "num_base_bdevs_operational": 4, 00:38:00.520 "process": { 00:38:00.520 "type": "rebuild", 00:38:00.520 "target": "spare", 00:38:00.520 "progress": { 00:38:00.520 "blocks": 105600, 00:38:00.520 "percent": 53 00:38:00.520 } 00:38:00.520 }, 00:38:00.520 "base_bdevs_list": [ 00:38:00.520 { 00:38:00.520 "name": "spare", 00:38:00.520 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:00.520 "is_configured": true, 00:38:00.520 "data_offset": 0, 00:38:00.520 "data_size": 65536 00:38:00.520 }, 00:38:00.520 { 00:38:00.520 "name": "BaseBdev2", 00:38:00.520 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:00.520 "is_configured": true, 00:38:00.520 "data_offset": 0, 00:38:00.520 "data_size": 65536 00:38:00.520 }, 00:38:00.520 { 00:38:00.520 "name": "BaseBdev3", 00:38:00.520 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:00.520 "is_configured": true, 00:38:00.520 "data_offset": 0, 00:38:00.520 "data_size": 65536 00:38:00.520 }, 00:38:00.520 { 00:38:00.520 "name": "BaseBdev4", 00:38:00.520 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:00.520 "is_configured": true, 00:38:00.520 "data_offset": 0, 00:38:00.520 "data_size": 65536 00:38:00.520 } 00:38:00.520 ] 00:38:00.520 }' 00:38:00.520 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:00.520 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:00.520 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:00.779 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:00.779 09:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:01.715 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.973 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:01.973 "name": "raid_bdev1", 00:38:01.973 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:01.973 "strip_size_kb": 64, 00:38:01.973 "state": "online", 00:38:01.973 "raid_level": "raid5f", 00:38:01.973 "superblock": false, 00:38:01.973 "num_base_bdevs": 4, 00:38:01.973 "num_base_bdevs_discovered": 4, 00:38:01.973 "num_base_bdevs_operational": 4, 00:38:01.973 "process": { 00:38:01.973 "type": "rebuild", 00:38:01.973 "target": "spare", 00:38:01.973 "progress": { 00:38:01.973 "blocks": 130560, 00:38:01.973 "percent": 66 00:38:01.973 } 00:38:01.973 }, 00:38:01.973 "base_bdevs_list": [ 00:38:01.973 { 00:38:01.973 "name": "spare", 00:38:01.973 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:01.973 "is_configured": true, 00:38:01.973 "data_offset": 0, 00:38:01.973 "data_size": 65536 00:38:01.973 }, 00:38:01.973 { 00:38:01.973 "name": "BaseBdev2", 00:38:01.973 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:01.973 "is_configured": true, 00:38:01.973 "data_offset": 0, 00:38:01.973 "data_size": 65536 00:38:01.973 }, 00:38:01.973 { 00:38:01.973 "name": "BaseBdev3", 00:38:01.973 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:01.973 "is_configured": true, 00:38:01.973 "data_offset": 0, 00:38:01.973 "data_size": 65536 00:38:01.973 }, 00:38:01.973 { 00:38:01.973 "name": "BaseBdev4", 00:38:01.973 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:01.973 "is_configured": true, 00:38:01.973 "data_offset": 0, 00:38:01.973 "data_size": 65536 00:38:01.973 } 00:38:01.973 ] 00:38:01.973 }' 00:38:01.973 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:01.973 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:01.973 09:04:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:01.973 09:04:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:01.973 09:04:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:02.906 09:04:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:02.906 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.165 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:03.165 "name": "raid_bdev1", 00:38:03.165 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:03.165 "strip_size_kb": 64, 00:38:03.165 "state": "online", 00:38:03.165 "raid_level": "raid5f", 00:38:03.165 "superblock": false, 00:38:03.165 "num_base_bdevs": 4, 00:38:03.165 "num_base_bdevs_discovered": 4, 00:38:03.165 "num_base_bdevs_operational": 4, 00:38:03.165 "process": { 00:38:03.165 "type": "rebuild", 00:38:03.165 "target": "spare", 00:38:03.165 "progress": { 00:38:03.165 "blocks": 157440, 00:38:03.165 "percent": 80 00:38:03.165 } 00:38:03.165 }, 00:38:03.165 "base_bdevs_list": [ 00:38:03.165 { 00:38:03.165 "name": "spare", 00:38:03.165 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:03.165 "is_configured": true, 00:38:03.165 "data_offset": 0, 00:38:03.165 "data_size": 65536 00:38:03.165 }, 00:38:03.165 { 00:38:03.165 "name": "BaseBdev2", 00:38:03.165 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:03.165 "is_configured": true, 00:38:03.165 "data_offset": 0, 00:38:03.165 "data_size": 65536 00:38:03.165 }, 00:38:03.165 { 00:38:03.165 "name": "BaseBdev3", 00:38:03.165 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:03.165 "is_configured": true, 00:38:03.165 "data_offset": 0, 00:38:03.165 "data_size": 65536 00:38:03.165 }, 00:38:03.165 { 00:38:03.165 "name": "BaseBdev4", 00:38:03.165 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:03.165 "is_configured": true, 00:38:03.165 "data_offset": 0, 00:38:03.165 "data_size": 65536 00:38:03.165 } 00:38:03.165 ] 00:38:03.165 }' 00:38:03.165 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:03.165 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:03.165 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:03.423 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:03.423 09:04:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.358 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:04.615 "name": "raid_bdev1", 
00:38:04.615 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:04.615 "strip_size_kb": 64, 00:38:04.615 "state": "online", 00:38:04.615 "raid_level": "raid5f", 00:38:04.615 "superblock": false, 00:38:04.615 "num_base_bdevs": 4, 00:38:04.615 "num_base_bdevs_discovered": 4, 00:38:04.615 "num_base_bdevs_operational": 4, 00:38:04.615 "process": { 00:38:04.615 "type": "rebuild", 00:38:04.615 "target": "spare", 00:38:04.615 "progress": { 00:38:04.615 "blocks": 182400, 00:38:04.615 "percent": 92 00:38:04.615 } 00:38:04.615 }, 00:38:04.615 "base_bdevs_list": [ 00:38:04.615 { 00:38:04.615 "name": "spare", 00:38:04.615 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:04.615 "is_configured": true, 00:38:04.615 "data_offset": 0, 00:38:04.615 "data_size": 65536 00:38:04.615 }, 00:38:04.615 { 00:38:04.615 "name": "BaseBdev2", 00:38:04.615 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:04.615 "is_configured": true, 00:38:04.615 "data_offset": 0, 00:38:04.615 "data_size": 65536 00:38:04.615 }, 00:38:04.615 { 00:38:04.615 "name": "BaseBdev3", 00:38:04.615 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:04.615 "is_configured": true, 00:38:04.615 "data_offset": 0, 00:38:04.615 "data_size": 65536 00:38:04.615 }, 00:38:04.615 { 00:38:04.615 "name": "BaseBdev4", 00:38:04.615 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:04.615 "is_configured": true, 00:38:04.615 "data_offset": 0, 00:38:04.615 "data_size": 65536 00:38:04.615 } 00:38:04.615 ] 00:38:04.615 }' 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:04.615 09:04:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:05.182 [2024-07-12 09:04:40.372033] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:05.182 [2024-07-12 09:04:40.372096] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:05.182 [2024-07-12 09:04:40.372171] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.749 09:04:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:06.007 "name": "raid_bdev1", 00:38:06.007 "uuid": 
"39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:06.007 "strip_size_kb": 64, 00:38:06.007 "state": "online", 00:38:06.007 "raid_level": "raid5f", 00:38:06.007 "superblock": false, 00:38:06.007 "num_base_bdevs": 4, 00:38:06.007 "num_base_bdevs_discovered": 4, 00:38:06.007 "num_base_bdevs_operational": 4, 00:38:06.007 "base_bdevs_list": [ 00:38:06.007 { 00:38:06.007 "name": "spare", 00:38:06.007 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:06.007 "is_configured": true, 00:38:06.007 "data_offset": 0, 00:38:06.007 "data_size": 65536 00:38:06.007 }, 00:38:06.007 { 00:38:06.007 "name": "BaseBdev2", 00:38:06.007 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:06.007 "is_configured": true, 00:38:06.007 "data_offset": 0, 00:38:06.007 "data_size": 65536 00:38:06.007 }, 00:38:06.007 { 00:38:06.007 "name": "BaseBdev3", 00:38:06.007 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:06.007 "is_configured": true, 00:38:06.007 "data_offset": 0, 00:38:06.007 "data_size": 65536 00:38:06.007 }, 00:38:06.007 { 00:38:06.007 "name": "BaseBdev4", 00:38:06.007 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:06.007 "is_configured": true, 00:38:06.007 "data_offset": 0, 00:38:06.007 "data_size": 65536 00:38:06.007 } 00:38:06.007 ] 00:38:06.007 }' 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.007 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.266 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:06.266 "name": "raid_bdev1", 00:38:06.266 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:06.266 "strip_size_kb": 64, 00:38:06.266 "state": "online", 00:38:06.266 "raid_level": "raid5f", 00:38:06.266 "superblock": false, 00:38:06.266 "num_base_bdevs": 4, 00:38:06.266 "num_base_bdevs_discovered": 4, 00:38:06.266 "num_base_bdevs_operational": 4, 00:38:06.266 "base_bdevs_list": [ 00:38:06.266 { 00:38:06.266 "name": "spare", 00:38:06.266 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:06.266 "is_configured": true, 00:38:06.266 "data_offset": 0, 00:38:06.266 "data_size": 65536 00:38:06.266 }, 00:38:06.266 { 00:38:06.266 "name": "BaseBdev2", 00:38:06.266 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:06.266 "is_configured": true, 00:38:06.266 "data_offset": 0, 00:38:06.266 
"data_size": 65536 00:38:06.266 }, 00:38:06.266 { 00:38:06.266 "name": "BaseBdev3", 00:38:06.266 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:06.266 "is_configured": true, 00:38:06.266 "data_offset": 0, 00:38:06.266 "data_size": 65536 00:38:06.266 }, 00:38:06.266 { 00:38:06.266 "name": "BaseBdev4", 00:38:06.266 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:06.266 "is_configured": true, 00:38:06.266 "data_offset": 0, 00:38:06.266 "data_size": 65536 00:38:06.266 } 00:38:06.266 ] 00:38:06.266 }' 00:38:06.266 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:06.266 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:06.266 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.525 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.790 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:06.790 "name": "raid_bdev1", 00:38:06.790 "uuid": "39bdcfd7-610b-436a-9822-3a8b3d3e191f", 00:38:06.790 "strip_size_kb": 64, 00:38:06.790 "state": "online", 00:38:06.790 "raid_level": "raid5f", 00:38:06.790 "superblock": false, 00:38:06.790 "num_base_bdevs": 4, 00:38:06.790 "num_base_bdevs_discovered": 4, 00:38:06.790 "num_base_bdevs_operational": 4, 00:38:06.790 "base_bdevs_list": [ 00:38:06.790 { 00:38:06.790 "name": "spare", 00:38:06.790 "uuid": "83460ef6-dc00-52b4-9589-662964b73c41", 00:38:06.790 "is_configured": true, 00:38:06.790 "data_offset": 0, 00:38:06.790 "data_size": 65536 00:38:06.790 }, 00:38:06.790 { 00:38:06.790 "name": "BaseBdev2", 00:38:06.790 "uuid": "cef055fd-3043-58c8-932d-60f494abf0a6", 00:38:06.790 "is_configured": true, 00:38:06.790 "data_offset": 0, 00:38:06.790 "data_size": 65536 00:38:06.790 }, 00:38:06.790 { 00:38:06.790 "name": "BaseBdev3", 00:38:06.790 "uuid": "7bc35b36-4085-5b09-b799-d4c6ad2d57b7", 00:38:06.790 "is_configured": true, 00:38:06.790 "data_offset": 0, 00:38:06.790 "data_size": 65536 00:38:06.790 }, 00:38:06.790 { 00:38:06.790 "name": 
"BaseBdev4", 00:38:06.790 "uuid": "8b18dd18-83e8-5634-9660-7a298de501c6", 00:38:06.790 "is_configured": true, 00:38:06.790 "data_offset": 0, 00:38:06.790 "data_size": 65536 00:38:06.790 } 00:38:06.790 ] 00:38:06.790 }' 00:38:06.790 09:04:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:06.790 09:04:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:07.722 09:04:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:07.722 [2024-07-12 09:04:42.798166] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:07.722 [2024-07-12 09:04:42.798200] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:07.722 [2024-07-12 09:04:42.798286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:07.722 [2024-07-12 09:04:42.798378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:07.722 [2024-07-12 09:04:42.798390] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:38:07.722 09:04:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.722 09:04:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:07.982 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:08.240 /dev/nbd0 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:08.240 09:04:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:08.240 1+0 records in 00:38:08.240 1+0 records out 00:38:08.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440819 s, 9.3 MB/s 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:08.240 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:08.498 /dev/nbd1 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:08.498 1+0 records in 00:38:08.498 1+0 records out 00:38:08.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504146 s, 8.1 MB/s 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:08.498 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:08.757 09:04:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:09.016 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:09.275 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # 
sleep 0.1 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 160159 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 160159 ']' 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 160159 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160159 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160159' 00:38:09.535 killing process with pid 160159 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 160159 00:38:09.535 09:04:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 160159 00:38:09.535 Received shutdown signal, test time was about 60.000000 seconds 00:38:09.535 00:38:09.535 Latency(us) 00:38:09.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.535 =================================================================================================================== 00:38:09.535 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:09.535 [2024-07-12 09:04:44.525410] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:09.794 [2024-07-12 09:04:44.901314] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:11.173 09:04:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:38:11.173 ************************************ 00:38:11.173 END TEST raid5f_rebuild_test 00:38:11.173 ************************************ 00:38:11.173 00:38:11.173 real 0m26.798s 00:38:11.173 user 0m39.658s 00:38:11.173 sys 0m2.739s 00:38:11.173 09:04:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:11.173 09:04:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:11.173 09:04:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:11.173 09:04:46 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:38:11.173 09:04:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:38:11.173 09:04:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:11.173 09:04:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:11.173 ************************************ 00:38:11.173 START TEST raid5f_rebuild_test_sb 
00:38:11.173 ************************************ 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:38:11.173 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=160843 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 160843 /var/tmp/spdk-raid.sock 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 160843 ']' 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:11.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:11.174 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:11.174 [2024-07-12 09:04:46.132156] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:38:11.174 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:11.174 Zero copy mechanism will not be used. 00:38:11.174 [2024-07-12 09:04:46.132378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160843 ] 00:38:11.174 [2024-07-12 09:04:46.296757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.433 [2024-07-12 09:04:46.484006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.693 [2024-07-12 09:04:46.680876] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:11.951 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:11.951 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:38:11.951 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:11.951 09:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:12.211 BaseBdev1_malloc 00:38:12.211 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:12.470 [2024-07-12 09:04:47.438005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:12.470 [2024-07-12 09:04:47.438138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.470 [2024-07-12 09:04:47.438176] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:38:12.470 [2024-07-12 09:04:47.438196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.470 [2024-07-12 09:04:47.440412] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.470 [2024-07-12 09:04:47.440456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:12.470 BaseBdev1 00:38:12.470 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:12.470 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:12.728 BaseBdev2_malloc 00:38:12.728 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:12.728 [2024-07-12 09:04:47.877401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:12.728 [2024-07-12 09:04:47.877516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.728 [2024-07-12 09:04:47.877554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:38:12.728 [2024-07-12 09:04:47.877573] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.728 [2024-07-12 09:04:47.879878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.728 [2024-07-12 09:04:47.879923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:12.728 BaseBdev2 00:38:12.728 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:12.728 09:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:12.987 BaseBdev3_malloc 00:38:12.987 09:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:13.245 [2024-07-12 09:04:48.343395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:13.245 [2024-07-12 09:04:48.343487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.245 [2024-07-12 09:04:48.343522] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:38:13.245 [2024-07-12 09:04:48.343550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.245 [2024-07-12 09:04:48.345788] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.245 [2024-07-12 09:04:48.345843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:13.245 BaseBdev3 00:38:13.245 09:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:13.245 09:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:13.504 BaseBdev4_malloc 00:38:13.504 09:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:13.763 [2024-07-12 09:04:48.798484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 
00:38:13.763 [2024-07-12 09:04:48.798576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.763 [2024-07-12 09:04:48.798612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:13.763 [2024-07-12 09:04:48.798637] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.763 [2024-07-12 09:04:48.800896] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.763 [2024-07-12 09:04:48.800945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:13.763 BaseBdev4 00:38:13.763 09:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:38:14.022 spare_malloc 00:38:14.022 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:14.280 spare_delay 00:38:14.280 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:14.280 [2024-07-12 09:04:49.445872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:14.280 [2024-07-12 09:04:49.445965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:14.280 [2024-07-12 09:04:49.445997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:38:14.280 [2024-07-12 09:04:49.446029] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:14.280 [2024-07-12 09:04:49.448282] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:14.280 [2024-07-12 09:04:49.448342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:14.280 spare 00:38:14.280 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:38:14.538 [2024-07-12 09:04:49.682024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:14.539 [2024-07-12 09:04:49.683986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:14.539 [2024-07-12 09:04:49.684069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:14.539 [2024-07-12 09:04:49.684124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:14.539 [2024-07-12 09:04:49.684346] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:38:14.539 [2024-07-12 09:04:49.684366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:38:14.539 [2024-07-12 09:04:49.684491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:14.539 [2024-07-12 09:04:49.690491] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:38:14.539 [2024-07-12 09:04:49.690516] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:38:14.539 [2024-07-12 09:04:49.690735] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.539 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.797 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:14.797 "name": "raid_bdev1", 00:38:14.797 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:14.797 "strip_size_kb": 64, 00:38:14.797 "state": "online", 00:38:14.797 "raid_level": "raid5f", 00:38:14.797 "superblock": true, 00:38:14.797 "num_base_bdevs": 4, 00:38:14.797 "num_base_bdevs_discovered": 4, 00:38:14.797 "num_base_bdevs_operational": 4, 00:38:14.797 "base_bdevs_list": [ 00:38:14.797 { 00:38:14.797 "name": "BaseBdev1", 00:38:14.797 "uuid": "e82e29b8-820c-57bb-8912-063fe730a73e", 00:38:14.797 "is_configured": true, 00:38:14.797 "data_offset": 2048, 00:38:14.797 "data_size": 63488 00:38:14.797 }, 00:38:14.797 { 00:38:14.797 "name": "BaseBdev2", 00:38:14.797 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:14.797 "is_configured": true, 00:38:14.797 "data_offset": 2048, 00:38:14.797 "data_size": 63488 00:38:14.797 }, 00:38:14.797 { 00:38:14.797 "name": "BaseBdev3", 00:38:14.797 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:14.797 "is_configured": true, 00:38:14.797 "data_offset": 2048, 00:38:14.797 "data_size": 63488 00:38:14.797 }, 00:38:14.797 { 00:38:14.797 "name": "BaseBdev4", 00:38:14.797 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:14.797 "is_configured": true, 00:38:14.797 "data_offset": 2048, 00:38:14.797 "data_size": 63488 00:38:14.797 } 00:38:14.797 ] 00:38:14.797 }' 00:38:14.797 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:14.797 09:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.364 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:15.364 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:15.653 [2024-07-12 09:04:50.689928] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:15.653 
09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:38:15.653 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.653 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:15.912 09:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:16.170 [2024-07-12 09:04:51.129903] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:16.170 /dev/nbd0 00:38:16.170 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:16.170 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:16.170 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:16.171 1+0 records in 00:38:16.171 1+0 records out 00:38:16.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028836 s, 14.2 MB/s 00:38:16.171 09:04:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:38:16.171 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:38:16.738 496+0 records in 00:38:16.738 496+0 records out 00:38:16.738 97517568 bytes (98 MB, 93 MiB) copied, 0.504664 s, 193 MB/s 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:16.738 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:16.996 09:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:16.996 [2024-07-12 09:04:51.979128] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:16.996 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:16.996 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:16.996 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:16.996 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:16.997 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 
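The trace above covers the data-fill phase of the test: the assembled raid5f bdev is exported through the kernel NBD driver, written with full stripes of random data, and then detached so a base bdev can be pulled for the rebuild scenario. A minimal sketch of that sequence, assuming only the socket path, bdev name and geometry already shown in the trace (4 base bdevs, strip_size_kb=64, 190464 blocks of 512 bytes) and not quoting test/bdev/bdev_raid.sh itself, would be:

# Sketch reconstructed from the commands visible in the trace above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Expose the raid bdev as a kernel block device.
$RPC nbd_start_disk raid_bdev1 /dev/nbd0

# With 4 members, a raid5f stripe carries data on 3 of them:
# 3 x 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 bytes,
# matching write_unit_size=384 in the trace.
# 190464 total blocks / 384 blocks per stripe = 496 full-stripe writes.
dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct

# Detach the NBD device before a base bdev is removed.
$RPC nbd_stop_disk /dev/nbd0

Writing only whole stripes keeps the check deterministic: each write maps onto a complete parity stripe, so the data laid down here can later be compared byte-for-byte against the rebuilt member. The cmp -i 1048576 near the end of this section presumably skips the first 1048576 bytes for that reason, i.e. the 2048-block data_offset reported for each base bdev in the JSON dumps.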
00:38:16.997 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:17.256 [2024-07-12 09:04:52.338357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.256 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:17.515 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:17.515 "name": "raid_bdev1", 00:38:17.515 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:17.515 "strip_size_kb": 64, 00:38:17.515 "state": "online", 00:38:17.515 "raid_level": "raid5f", 00:38:17.515 "superblock": true, 00:38:17.515 "num_base_bdevs": 4, 00:38:17.515 "num_base_bdevs_discovered": 3, 00:38:17.515 "num_base_bdevs_operational": 3, 00:38:17.515 "base_bdevs_list": [ 00:38:17.515 { 00:38:17.515 "name": null, 00:38:17.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:17.515 "is_configured": false, 00:38:17.515 "data_offset": 2048, 00:38:17.515 "data_size": 63488 00:38:17.515 }, 00:38:17.515 { 00:38:17.515 "name": "BaseBdev2", 00:38:17.515 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:17.515 "is_configured": true, 00:38:17.515 "data_offset": 2048, 00:38:17.515 "data_size": 63488 00:38:17.515 }, 00:38:17.515 { 00:38:17.515 "name": "BaseBdev3", 00:38:17.515 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:17.515 "is_configured": true, 00:38:17.515 "data_offset": 2048, 00:38:17.515 "data_size": 63488 00:38:17.515 }, 00:38:17.515 { 00:38:17.515 "name": "BaseBdev4", 00:38:17.515 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:17.515 "is_configured": true, 00:38:17.515 "data_offset": 2048, 00:38:17.515 "data_size": 63488 00:38:17.515 } 00:38:17.515 ] 00:38:17.515 }' 00:38:17.515 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:17.515 09:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:18.449 09:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
spare 00:38:18.449 [2024-07-12 09:04:53.546695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:18.449 [2024-07-12 09:04:53.557930] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c520 00:38:18.449 [2024-07-12 09:04:53.565501] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:18.449 09:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.383 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.643 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:19.643 "name": "raid_bdev1", 00:38:19.643 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:19.643 "strip_size_kb": 64, 00:38:19.643 "state": "online", 00:38:19.643 "raid_level": "raid5f", 00:38:19.643 "superblock": true, 00:38:19.643 "num_base_bdevs": 4, 00:38:19.643 "num_base_bdevs_discovered": 4, 00:38:19.643 "num_base_bdevs_operational": 4, 00:38:19.643 "process": { 00:38:19.643 "type": "rebuild", 00:38:19.643 "target": "spare", 00:38:19.643 "progress": { 00:38:19.643 "blocks": 23040, 00:38:19.643 "percent": 12 00:38:19.643 } 00:38:19.643 }, 00:38:19.643 "base_bdevs_list": [ 00:38:19.643 { 00:38:19.643 "name": "spare", 00:38:19.643 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:19.643 "is_configured": true, 00:38:19.643 "data_offset": 2048, 00:38:19.643 "data_size": 63488 00:38:19.643 }, 00:38:19.643 { 00:38:19.643 "name": "BaseBdev2", 00:38:19.643 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:19.643 "is_configured": true, 00:38:19.643 "data_offset": 2048, 00:38:19.643 "data_size": 63488 00:38:19.643 }, 00:38:19.643 { 00:38:19.643 "name": "BaseBdev3", 00:38:19.643 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:19.643 "is_configured": true, 00:38:19.643 "data_offset": 2048, 00:38:19.643 "data_size": 63488 00:38:19.643 }, 00:38:19.643 { 00:38:19.643 "name": "BaseBdev4", 00:38:19.643 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:19.643 "is_configured": true, 00:38:19.643 "data_offset": 2048, 00:38:19.643 "data_size": 63488 00:38:19.643 } 00:38:19.643 ] 00:38:19.643 }' 00:38:19.643 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:19.901 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:19.902 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:19.902 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:19.902 09:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:20.160 [2024-07-12 09:04:55.163259] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:20.160 [2024-07-12 09:04:55.178419] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:20.160 [2024-07-12 09:04:55.178507] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:20.160 [2024-07-12 09:04:55.178527] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:20.160 [2024-07-12 09:04:55.178536] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:20.160 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.420 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:20.420 "name": "raid_bdev1", 00:38:20.420 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:20.420 "strip_size_kb": 64, 00:38:20.420 "state": "online", 00:38:20.420 "raid_level": "raid5f", 00:38:20.420 "superblock": true, 00:38:20.420 "num_base_bdevs": 4, 00:38:20.420 "num_base_bdevs_discovered": 3, 00:38:20.420 "num_base_bdevs_operational": 3, 00:38:20.420 "base_bdevs_list": [ 00:38:20.420 { 00:38:20.420 "name": null, 00:38:20.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.420 "is_configured": false, 00:38:20.420 "data_offset": 2048, 00:38:20.420 "data_size": 63488 00:38:20.420 }, 00:38:20.420 { 00:38:20.420 "name": "BaseBdev2", 00:38:20.420 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:20.420 "is_configured": true, 00:38:20.420 "data_offset": 2048, 00:38:20.420 "data_size": 63488 00:38:20.420 }, 00:38:20.420 { 00:38:20.420 "name": "BaseBdev3", 00:38:20.420 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:20.420 "is_configured": true, 00:38:20.420 "data_offset": 2048, 00:38:20.420 "data_size": 63488 00:38:20.420 }, 00:38:20.420 { 00:38:20.420 "name": "BaseBdev4", 00:38:20.420 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:20.420 "is_configured": true, 00:38:20.420 "data_offset": 2048, 00:38:20.420 "data_size": 63488 
00:38:20.420 } 00:38:20.420 ] 00:38:20.420 }' 00:38:20.420 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:20.420 09:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:20.988 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:21.246 "name": "raid_bdev1", 00:38:21.246 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:21.246 "strip_size_kb": 64, 00:38:21.246 "state": "online", 00:38:21.246 "raid_level": "raid5f", 00:38:21.246 "superblock": true, 00:38:21.246 "num_base_bdevs": 4, 00:38:21.246 "num_base_bdevs_discovered": 3, 00:38:21.246 "num_base_bdevs_operational": 3, 00:38:21.246 "base_bdevs_list": [ 00:38:21.246 { 00:38:21.246 "name": null, 00:38:21.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.246 "is_configured": false, 00:38:21.246 "data_offset": 2048, 00:38:21.246 "data_size": 63488 00:38:21.246 }, 00:38:21.246 { 00:38:21.246 "name": "BaseBdev2", 00:38:21.246 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:21.246 "is_configured": true, 00:38:21.246 "data_offset": 2048, 00:38:21.246 "data_size": 63488 00:38:21.246 }, 00:38:21.246 { 00:38:21.246 "name": "BaseBdev3", 00:38:21.246 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:21.246 "is_configured": true, 00:38:21.246 "data_offset": 2048, 00:38:21.246 "data_size": 63488 00:38:21.246 }, 00:38:21.246 { 00:38:21.246 "name": "BaseBdev4", 00:38:21.246 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:21.246 "is_configured": true, 00:38:21.246 "data_offset": 2048, 00:38:21.246 "data_size": 63488 00:38:21.246 } 00:38:21.246 ] 00:38:21.246 }' 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:21.246 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:21.505 [2024-07-12 09:04:56.642456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:21.505 [2024-07-12 09:04:56.654388] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c6c0 00:38:21.505 [2024-07-12 09:04:56.662332] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:38:21.505 09:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:22.880 "name": "raid_bdev1", 00:38:22.880 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:22.880 "strip_size_kb": 64, 00:38:22.880 "state": "online", 00:38:22.880 "raid_level": "raid5f", 00:38:22.880 "superblock": true, 00:38:22.880 "num_base_bdevs": 4, 00:38:22.880 "num_base_bdevs_discovered": 4, 00:38:22.880 "num_base_bdevs_operational": 4, 00:38:22.880 "process": { 00:38:22.880 "type": "rebuild", 00:38:22.880 "target": "spare", 00:38:22.880 "progress": { 00:38:22.880 "blocks": 23040, 00:38:22.880 "percent": 12 00:38:22.880 } 00:38:22.880 }, 00:38:22.880 "base_bdevs_list": [ 00:38:22.880 { 00:38:22.880 "name": "spare", 00:38:22.880 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:22.880 "is_configured": true, 00:38:22.880 "data_offset": 2048, 00:38:22.880 "data_size": 63488 00:38:22.880 }, 00:38:22.880 { 00:38:22.880 "name": "BaseBdev2", 00:38:22.880 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:22.880 "is_configured": true, 00:38:22.880 "data_offset": 2048, 00:38:22.880 "data_size": 63488 00:38:22.880 }, 00:38:22.880 { 00:38:22.880 "name": "BaseBdev3", 00:38:22.880 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:22.880 "is_configured": true, 00:38:22.880 "data_offset": 2048, 00:38:22.880 "data_size": 63488 00:38:22.880 }, 00:38:22.880 { 00:38:22.880 "name": "BaseBdev4", 00:38:22.880 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:22.880 "is_configured": true, 00:38:22.880 "data_offset": 2048, 00:38:22.880 "data_size": 63488 00:38:22.880 } 00:38:22.880 ] 00:38:22.880 }' 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:22.880 09:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:22.880 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:22.880 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:38:22.880 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:38:22.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:38:22.880 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:38:22.880 
09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:38:22.880 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1397 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:22.881 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:23.138 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:23.138 "name": "raid_bdev1", 00:38:23.138 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:23.138 "strip_size_kb": 64, 00:38:23.138 "state": "online", 00:38:23.138 "raid_level": "raid5f", 00:38:23.138 "superblock": true, 00:38:23.138 "num_base_bdevs": 4, 00:38:23.138 "num_base_bdevs_discovered": 4, 00:38:23.138 "num_base_bdevs_operational": 4, 00:38:23.138 "process": { 00:38:23.138 "type": "rebuild", 00:38:23.138 "target": "spare", 00:38:23.138 "progress": { 00:38:23.138 "blocks": 28800, 00:38:23.138 "percent": 15 00:38:23.138 } 00:38:23.138 }, 00:38:23.138 "base_bdevs_list": [ 00:38:23.138 { 00:38:23.138 "name": "spare", 00:38:23.138 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:23.138 "is_configured": true, 00:38:23.138 "data_offset": 2048, 00:38:23.138 "data_size": 63488 00:38:23.138 }, 00:38:23.138 { 00:38:23.138 "name": "BaseBdev2", 00:38:23.138 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:23.138 "is_configured": true, 00:38:23.138 "data_offset": 2048, 00:38:23.138 "data_size": 63488 00:38:23.138 }, 00:38:23.138 { 00:38:23.138 "name": "BaseBdev3", 00:38:23.138 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:23.138 "is_configured": true, 00:38:23.138 "data_offset": 2048, 00:38:23.138 "data_size": 63488 00:38:23.138 }, 00:38:23.138 { 00:38:23.138 "name": "BaseBdev4", 00:38:23.138 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:23.138 "is_configured": true, 00:38:23.138 "data_offset": 2048, 00:38:23.138 "data_size": 63488 00:38:23.138 } 00:38:23.138 ] 00:38:23.138 }' 00:38:23.138 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:23.138 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:23.138 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:23.397 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:23.397 09:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:24.387 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:24.388 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.388 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.645 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:24.645 "name": "raid_bdev1", 00:38:24.645 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:24.645 "strip_size_kb": 64, 00:38:24.645 "state": "online", 00:38:24.645 "raid_level": "raid5f", 00:38:24.645 "superblock": true, 00:38:24.645 "num_base_bdevs": 4, 00:38:24.645 "num_base_bdevs_discovered": 4, 00:38:24.645 "num_base_bdevs_operational": 4, 00:38:24.645 "process": { 00:38:24.645 "type": "rebuild", 00:38:24.645 "target": "spare", 00:38:24.645 "progress": { 00:38:24.645 "blocks": 55680, 00:38:24.645 "percent": 29 00:38:24.645 } 00:38:24.645 }, 00:38:24.645 "base_bdevs_list": [ 00:38:24.645 { 00:38:24.645 "name": "spare", 00:38:24.645 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:24.645 "is_configured": true, 00:38:24.645 "data_offset": 2048, 00:38:24.645 "data_size": 63488 00:38:24.645 }, 00:38:24.645 { 00:38:24.645 "name": "BaseBdev2", 00:38:24.645 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:24.645 "is_configured": true, 00:38:24.645 "data_offset": 2048, 00:38:24.645 "data_size": 63488 00:38:24.645 }, 00:38:24.645 { 00:38:24.645 "name": "BaseBdev3", 00:38:24.645 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:24.645 "is_configured": true, 00:38:24.645 "data_offset": 2048, 00:38:24.645 "data_size": 63488 00:38:24.645 }, 00:38:24.645 { 00:38:24.645 "name": "BaseBdev4", 00:38:24.645 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:24.645 "is_configured": true, 00:38:24.646 "data_offset": 2048, 00:38:24.646 "data_size": 63488 00:38:24.646 } 00:38:24.646 ] 00:38:24.646 }' 00:38:24.646 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:24.646 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:24.646 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:24.646 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:24.646 09:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.578 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.836 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:25.836 "name": "raid_bdev1", 00:38:25.836 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:25.836 "strip_size_kb": 64, 00:38:25.836 "state": "online", 00:38:25.836 "raid_level": "raid5f", 00:38:25.836 "superblock": true, 00:38:25.836 "num_base_bdevs": 4, 00:38:25.836 "num_base_bdevs_discovered": 4, 00:38:25.836 "num_base_bdevs_operational": 4, 00:38:25.836 "process": { 00:38:25.836 "type": "rebuild", 00:38:25.836 "target": "spare", 00:38:25.836 "progress": { 00:38:25.836 "blocks": 80640, 00:38:25.836 "percent": 42 00:38:25.836 } 00:38:25.836 }, 00:38:25.836 "base_bdevs_list": [ 00:38:25.836 { 00:38:25.836 "name": "spare", 00:38:25.836 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:25.836 "is_configured": true, 00:38:25.836 "data_offset": 2048, 00:38:25.836 "data_size": 63488 00:38:25.836 }, 00:38:25.836 { 00:38:25.836 "name": "BaseBdev2", 00:38:25.836 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:25.836 "is_configured": true, 00:38:25.836 "data_offset": 2048, 00:38:25.836 "data_size": 63488 00:38:25.836 }, 00:38:25.836 { 00:38:25.836 "name": "BaseBdev3", 00:38:25.836 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:25.836 "is_configured": true, 00:38:25.836 "data_offset": 2048, 00:38:25.836 "data_size": 63488 00:38:25.836 }, 00:38:25.836 { 00:38:25.836 "name": "BaseBdev4", 00:38:25.836 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:25.836 "is_configured": true, 00:38:25.836 "data_offset": 2048, 00:38:25.836 "data_size": 63488 00:38:25.836 } 00:38:25.836 ] 00:38:25.836 }' 00:38:25.836 09:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:26.093 09:05:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:26.093 09:05:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:26.093 09:05:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:26.093 09:05:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:27.027 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.027 09:05:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.286 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:27.286 "name": "raid_bdev1", 00:38:27.286 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:27.286 "strip_size_kb": 64, 00:38:27.286 "state": "online", 00:38:27.286 "raid_level": "raid5f", 00:38:27.286 "superblock": true, 00:38:27.286 "num_base_bdevs": 4, 00:38:27.286 "num_base_bdevs_discovered": 4, 00:38:27.286 "num_base_bdevs_operational": 4, 00:38:27.286 "process": { 00:38:27.286 "type": "rebuild", 00:38:27.286 "target": "spare", 00:38:27.286 "progress": { 00:38:27.286 "blocks": 107520, 00:38:27.286 "percent": 56 00:38:27.286 } 00:38:27.286 }, 00:38:27.286 "base_bdevs_list": [ 00:38:27.286 { 00:38:27.286 "name": "spare", 00:38:27.286 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:27.286 "is_configured": true, 00:38:27.286 "data_offset": 2048, 00:38:27.286 "data_size": 63488 00:38:27.286 }, 00:38:27.286 { 00:38:27.286 "name": "BaseBdev2", 00:38:27.286 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:27.286 "is_configured": true, 00:38:27.286 "data_offset": 2048, 00:38:27.286 "data_size": 63488 00:38:27.286 }, 00:38:27.286 { 00:38:27.286 "name": "BaseBdev3", 00:38:27.286 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:27.286 "is_configured": true, 00:38:27.286 "data_offset": 2048, 00:38:27.286 "data_size": 63488 00:38:27.286 }, 00:38:27.286 { 00:38:27.286 "name": "BaseBdev4", 00:38:27.286 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:27.286 "is_configured": true, 00:38:27.286 "data_offset": 2048, 00:38:27.286 "data_size": 63488 00:38:27.286 } 00:38:27.286 ] 00:38:27.286 }' 00:38:27.286 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:27.286 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:27.286 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:27.544 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:27.544 09:05:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.477 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:28.736 "name": "raid_bdev1", 00:38:28.736 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:28.736 "strip_size_kb": 64, 00:38:28.736 "state": 
"online", 00:38:28.736 "raid_level": "raid5f", 00:38:28.736 "superblock": true, 00:38:28.736 "num_base_bdevs": 4, 00:38:28.736 "num_base_bdevs_discovered": 4, 00:38:28.736 "num_base_bdevs_operational": 4, 00:38:28.736 "process": { 00:38:28.736 "type": "rebuild", 00:38:28.736 "target": "spare", 00:38:28.736 "progress": { 00:38:28.736 "blocks": 134400, 00:38:28.736 "percent": 70 00:38:28.736 } 00:38:28.736 }, 00:38:28.736 "base_bdevs_list": [ 00:38:28.736 { 00:38:28.736 "name": "spare", 00:38:28.736 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:28.736 "is_configured": true, 00:38:28.736 "data_offset": 2048, 00:38:28.736 "data_size": 63488 00:38:28.736 }, 00:38:28.736 { 00:38:28.736 "name": "BaseBdev2", 00:38:28.736 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:28.736 "is_configured": true, 00:38:28.736 "data_offset": 2048, 00:38:28.736 "data_size": 63488 00:38:28.736 }, 00:38:28.736 { 00:38:28.736 "name": "BaseBdev3", 00:38:28.736 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:28.736 "is_configured": true, 00:38:28.736 "data_offset": 2048, 00:38:28.736 "data_size": 63488 00:38:28.736 }, 00:38:28.736 { 00:38:28.736 "name": "BaseBdev4", 00:38:28.736 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:28.736 "is_configured": true, 00:38:28.736 "data_offset": 2048, 00:38:28.736 "data_size": 63488 00:38:28.736 } 00:38:28.736 ] 00:38:28.736 }' 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:28.736 09:05:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.155 09:05:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:30.155 "name": "raid_bdev1", 00:38:30.155 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:30.155 "strip_size_kb": 64, 00:38:30.155 "state": "online", 00:38:30.155 "raid_level": "raid5f", 00:38:30.155 "superblock": true, 00:38:30.155 "num_base_bdevs": 4, 00:38:30.155 "num_base_bdevs_discovered": 4, 00:38:30.155 "num_base_bdevs_operational": 4, 00:38:30.155 "process": { 00:38:30.155 "type": "rebuild", 00:38:30.155 "target": "spare", 00:38:30.155 "progress": { 00:38:30.155 "blocks": 159360, 00:38:30.155 
"percent": 83 00:38:30.155 } 00:38:30.155 }, 00:38:30.155 "base_bdevs_list": [ 00:38:30.155 { 00:38:30.155 "name": "spare", 00:38:30.155 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:30.155 "is_configured": true, 00:38:30.155 "data_offset": 2048, 00:38:30.155 "data_size": 63488 00:38:30.155 }, 00:38:30.155 { 00:38:30.155 "name": "BaseBdev2", 00:38:30.155 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:30.155 "is_configured": true, 00:38:30.155 "data_offset": 2048, 00:38:30.155 "data_size": 63488 00:38:30.155 }, 00:38:30.155 { 00:38:30.155 "name": "BaseBdev3", 00:38:30.155 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:30.155 "is_configured": true, 00:38:30.155 "data_offset": 2048, 00:38:30.155 "data_size": 63488 00:38:30.155 }, 00:38:30.155 { 00:38:30.155 "name": "BaseBdev4", 00:38:30.155 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:30.155 "is_configured": true, 00:38:30.155 "data_offset": 2048, 00:38:30.155 "data_size": 63488 00:38:30.155 } 00:38:30.155 ] 00:38:30.155 }' 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:30.155 09:05:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:31.091 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.350 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:31.350 "name": "raid_bdev1", 00:38:31.350 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:31.350 "strip_size_kb": 64, 00:38:31.350 "state": "online", 00:38:31.350 "raid_level": "raid5f", 00:38:31.350 "superblock": true, 00:38:31.350 "num_base_bdevs": 4, 00:38:31.350 "num_base_bdevs_discovered": 4, 00:38:31.350 "num_base_bdevs_operational": 4, 00:38:31.350 "process": { 00:38:31.350 "type": "rebuild", 00:38:31.350 "target": "spare", 00:38:31.350 "progress": { 00:38:31.350 "blocks": 186240, 00:38:31.350 "percent": 97 00:38:31.350 } 00:38:31.350 }, 00:38:31.350 "base_bdevs_list": [ 00:38:31.350 { 00:38:31.350 "name": "spare", 00:38:31.350 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:31.350 "is_configured": true, 00:38:31.350 "data_offset": 2048, 00:38:31.350 "data_size": 63488 00:38:31.350 }, 00:38:31.350 { 00:38:31.350 "name": "BaseBdev2", 00:38:31.350 
"uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:31.350 "is_configured": true, 00:38:31.350 "data_offset": 2048, 00:38:31.350 "data_size": 63488 00:38:31.350 }, 00:38:31.350 { 00:38:31.350 "name": "BaseBdev3", 00:38:31.350 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:31.350 "is_configured": true, 00:38:31.350 "data_offset": 2048, 00:38:31.350 "data_size": 63488 00:38:31.350 }, 00:38:31.350 { 00:38:31.350 "name": "BaseBdev4", 00:38:31.350 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:31.350 "is_configured": true, 00:38:31.350 "data_offset": 2048, 00:38:31.350 "data_size": 63488 00:38:31.350 } 00:38:31.350 ] 00:38:31.350 }' 00:38:31.350 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:31.350 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:31.350 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:31.608 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:31.608 09:05:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:31.609 [2024-07-12 09:05:06.746040] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:31.609 [2024-07-12 09:05:06.746113] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:31.609 [2024-07-12 09:05:06.746278] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:32.543 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:32.801 "name": "raid_bdev1", 00:38:32.801 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:32.801 "strip_size_kb": 64, 00:38:32.801 "state": "online", 00:38:32.801 "raid_level": "raid5f", 00:38:32.801 "superblock": true, 00:38:32.801 "num_base_bdevs": 4, 00:38:32.801 "num_base_bdevs_discovered": 4, 00:38:32.801 "num_base_bdevs_operational": 4, 00:38:32.801 "base_bdevs_list": [ 00:38:32.801 { 00:38:32.801 "name": "spare", 00:38:32.801 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:32.801 "is_configured": true, 00:38:32.801 "data_offset": 2048, 00:38:32.801 "data_size": 63488 00:38:32.801 }, 00:38:32.801 { 00:38:32.801 "name": "BaseBdev2", 00:38:32.801 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:32.801 "is_configured": true, 00:38:32.801 "data_offset": 2048, 00:38:32.801 "data_size": 63488 00:38:32.801 }, 00:38:32.801 { 
00:38:32.801 "name": "BaseBdev3", 00:38:32.801 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:32.801 "is_configured": true, 00:38:32.801 "data_offset": 2048, 00:38:32.801 "data_size": 63488 00:38:32.801 }, 00:38:32.801 { 00:38:32.801 "name": "BaseBdev4", 00:38:32.801 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:32.801 "is_configured": true, 00:38:32.801 "data_offset": 2048, 00:38:32.801 "data_size": 63488 00:38:32.801 } 00:38:32.801 ] 00:38:32.801 }' 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.801 09:05:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.059 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:33.059 "name": "raid_bdev1", 00:38:33.059 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:33.059 "strip_size_kb": 64, 00:38:33.059 "state": "online", 00:38:33.059 "raid_level": "raid5f", 00:38:33.059 "superblock": true, 00:38:33.059 "num_base_bdevs": 4, 00:38:33.059 "num_base_bdevs_discovered": 4, 00:38:33.059 "num_base_bdevs_operational": 4, 00:38:33.059 "base_bdevs_list": [ 00:38:33.059 { 00:38:33.059 "name": "spare", 00:38:33.059 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:33.059 "is_configured": true, 00:38:33.059 "data_offset": 2048, 00:38:33.059 "data_size": 63488 00:38:33.059 }, 00:38:33.059 { 00:38:33.059 "name": "BaseBdev2", 00:38:33.059 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:33.059 "is_configured": true, 00:38:33.059 "data_offset": 2048, 00:38:33.059 "data_size": 63488 00:38:33.059 }, 00:38:33.059 { 00:38:33.059 "name": "BaseBdev3", 00:38:33.059 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:33.059 "is_configured": true, 00:38:33.059 "data_offset": 2048, 00:38:33.059 "data_size": 63488 00:38:33.059 }, 00:38:33.059 { 00:38:33.059 "name": "BaseBdev4", 00:38:33.059 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:33.059 "is_configured": true, 00:38:33.059 "data_offset": 2048, 00:38:33.059 "data_size": 63488 00:38:33.059 } 00:38:33.059 ] 00:38:33.059 }' 00:38:33.059 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:33.059 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 
00:38:33.059 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:33.316 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:33.317 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:33.317 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.317 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.575 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:33.575 "name": "raid_bdev1", 00:38:33.575 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:33.575 "strip_size_kb": 64, 00:38:33.575 "state": "online", 00:38:33.575 "raid_level": "raid5f", 00:38:33.575 "superblock": true, 00:38:33.575 "num_base_bdevs": 4, 00:38:33.575 "num_base_bdevs_discovered": 4, 00:38:33.575 "num_base_bdevs_operational": 4, 00:38:33.575 "base_bdevs_list": [ 00:38:33.575 { 00:38:33.575 "name": "spare", 00:38:33.575 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:33.575 "is_configured": true, 00:38:33.575 "data_offset": 2048, 00:38:33.575 "data_size": 63488 00:38:33.575 }, 00:38:33.575 { 00:38:33.575 "name": "BaseBdev2", 00:38:33.575 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:33.575 "is_configured": true, 00:38:33.575 "data_offset": 2048, 00:38:33.575 "data_size": 63488 00:38:33.575 }, 00:38:33.575 { 00:38:33.575 "name": "BaseBdev3", 00:38:33.575 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:33.575 "is_configured": true, 00:38:33.575 "data_offset": 2048, 00:38:33.575 "data_size": 63488 00:38:33.575 }, 00:38:33.575 { 00:38:33.575 "name": "BaseBdev4", 00:38:33.575 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:33.575 "is_configured": true, 00:38:33.575 "data_offset": 2048, 00:38:33.575 "data_size": 63488 00:38:33.575 } 00:38:33.575 ] 00:38:33.575 }' 00:38:33.575 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:33.575 09:05:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:34.140 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:34.399 [2024-07-12 09:05:09.438205] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:38:34.399 [2024-07-12 09:05:09.438245] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:34.399 [2024-07-12 09:05:09.438344] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:34.399 [2024-07-12 09:05:09.438446] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:34.399 [2024-07-12 09:05:09.438476] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:38:34.399 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.399 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:34.657 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:34.915 /dev/nbd0 00:38:34.915 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:34.915 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:34.916 09:05:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:34.916 1+0 records in 00:38:34.916 1+0 records out 00:38:34.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375685 s, 10.9 MB/s 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:34.916 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:38:35.174 /dev/nbd1 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:35.174 1+0 records in 00:38:35.174 1+0 records out 00:38:35.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607384 s, 6.7 MB/s 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:35.174 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:35.432 
09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:35.432 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:38:35.689 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:35.689 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:35.689 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:35.690 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:35.948 09:05:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:38:35.948 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:36.206 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:36.463 [2024-07-12 09:05:11.530114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:36.463 [2024-07-12 09:05:11.530228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:36.463 [2024-07-12 09:05:11.530281] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:38:36.463 [2024-07-12 09:05:11.530313] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:36.463 [2024-07-12 09:05:11.532614] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:36.463 [2024-07-12 09:05:11.532692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:36.463 [2024-07-12 09:05:11.532837] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:36.463 [2024-07-12 09:05:11.532944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:36.463 [2024-07-12 09:05:11.533140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:36.463 [2024-07-12 09:05:11.533310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:36.463 [2024-07-12 09:05:11.533429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:36.463 spare 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:36.463 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:36.463 [2024-07-12 09:05:11.633560] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:38:36.463 [2024-07-12 09:05:11.633622] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:38:36.463 [2024-07-12 09:05:11.633841] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004c5d0 00:38:36.463 [2024-07-12 09:05:11.639428] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:38:36.463 [2024-07-12 09:05:11.639457] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:38:36.463 [2024-07-12 09:05:11.639667] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:36.721 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:36.721 "name": "raid_bdev1", 00:38:36.721 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:36.721 "strip_size_kb": 64, 00:38:36.721 "state": "online", 00:38:36.721 "raid_level": "raid5f", 00:38:36.721 "superblock": true, 00:38:36.721 "num_base_bdevs": 4, 00:38:36.721 "num_base_bdevs_discovered": 4, 00:38:36.721 "num_base_bdevs_operational": 4, 00:38:36.721 "base_bdevs_list": [ 00:38:36.721 { 00:38:36.721 "name": "spare", 00:38:36.721 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:36.721 "is_configured": true, 00:38:36.721 "data_offset": 2048, 00:38:36.721 "data_size": 63488 00:38:36.721 }, 00:38:36.721 { 00:38:36.721 "name": "BaseBdev2", 00:38:36.721 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:36.721 "is_configured": true, 00:38:36.721 "data_offset": 2048, 00:38:36.721 "data_size": 63488 00:38:36.721 }, 00:38:36.721 { 00:38:36.721 "name": "BaseBdev3", 00:38:36.721 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:36.721 "is_configured": true, 00:38:36.721 "data_offset": 2048, 00:38:36.721 "data_size": 63488 00:38:36.721 }, 00:38:36.721 { 00:38:36.721 "name": "BaseBdev4", 00:38:36.721 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:36.721 "is_configured": true, 00:38:36.721 "data_offset": 2048, 00:38:36.721 "data_size": 63488 00:38:36.721 } 00:38:36.721 ] 00:38:36.721 }' 00:38:36.721 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:36.721 09:05:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:37.286 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:37.286 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:37.286 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:37.286 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:37.286 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:37.544 "name": "raid_bdev1", 00:38:37.544 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:37.544 "strip_size_kb": 64, 00:38:37.544 "state": "online", 00:38:37.544 "raid_level": "raid5f", 00:38:37.544 "superblock": true, 00:38:37.544 "num_base_bdevs": 4, 00:38:37.544 "num_base_bdevs_discovered": 4, 00:38:37.544 "num_base_bdevs_operational": 4, 00:38:37.544 "base_bdevs_list": [ 
00:38:37.544 { 00:38:37.544 "name": "spare", 00:38:37.544 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:37.544 "is_configured": true, 00:38:37.544 "data_offset": 2048, 00:38:37.544 "data_size": 63488 00:38:37.544 }, 00:38:37.544 { 00:38:37.544 "name": "BaseBdev2", 00:38:37.544 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:37.544 "is_configured": true, 00:38:37.544 "data_offset": 2048, 00:38:37.544 "data_size": 63488 00:38:37.544 }, 00:38:37.544 { 00:38:37.544 "name": "BaseBdev3", 00:38:37.544 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:37.544 "is_configured": true, 00:38:37.544 "data_offset": 2048, 00:38:37.544 "data_size": 63488 00:38:37.544 }, 00:38:37.544 { 00:38:37.544 "name": "BaseBdev4", 00:38:37.544 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:37.544 "is_configured": true, 00:38:37.544 "data_offset": 2048, 00:38:37.544 "data_size": 63488 00:38:37.544 } 00:38:37.544 ] 00:38:37.544 }' 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:37.544 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:37.802 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:37.802 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.802 09:05:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:38.058 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:38:38.058 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:38.315 [2024-07-12 09:05:13.298457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:38.315 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:38.316 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:38.316 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:38.316 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:38.316 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:38.572 09:05:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:38.572 "name": "raid_bdev1", 00:38:38.572 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:38.572 "strip_size_kb": 64, 00:38:38.572 "state": "online", 00:38:38.573 "raid_level": "raid5f", 00:38:38.573 "superblock": true, 00:38:38.573 "num_base_bdevs": 4, 00:38:38.573 "num_base_bdevs_discovered": 3, 00:38:38.573 "num_base_bdevs_operational": 3, 00:38:38.573 "base_bdevs_list": [ 00:38:38.573 { 00:38:38.573 "name": null, 00:38:38.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:38.573 "is_configured": false, 00:38:38.573 "data_offset": 2048, 00:38:38.573 "data_size": 63488 00:38:38.573 }, 00:38:38.573 { 00:38:38.573 "name": "BaseBdev2", 00:38:38.573 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:38.573 "is_configured": true, 00:38:38.573 "data_offset": 2048, 00:38:38.573 "data_size": 63488 00:38:38.573 }, 00:38:38.573 { 00:38:38.573 "name": "BaseBdev3", 00:38:38.573 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:38.573 "is_configured": true, 00:38:38.573 "data_offset": 2048, 00:38:38.573 "data_size": 63488 00:38:38.573 }, 00:38:38.573 { 00:38:38.573 "name": "BaseBdev4", 00:38:38.573 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:38.573 "is_configured": true, 00:38:38.573 "data_offset": 2048, 00:38:38.573 "data_size": 63488 00:38:38.573 } 00:38:38.573 ] 00:38:38.573 }' 00:38:38.573 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:38.573 09:05:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:39.138 09:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:39.396 [2024-07-12 09:05:14.366615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.396 [2024-07-12 09:05:14.366811] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:39.396 [2024-07-12 09:05:14.366828] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
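For readers following the trace: every verify_raid_bdev_state / verify_raid_bdev_process check in this run boils down to the same pattern visible above — query the per-test RPC socket for all raid bdevs, select raid_bdev1 with jq, then read individual fields out of that JSON blob. A minimal sketch of that pattern, using only the socket path, RPC name and jq filters that appear in this log (not a verbatim copy of the test helpers):

    # Query all raid bdevs over the test RPC socket and pick out raid_bdev1
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # State, level and any in-flight rebuild are then read from the selected object
    echo "$info" | jq -r '.state'                      # expected "online" in the checks above
    echo "$info" | jq -r '.process.type // "none"'     # "none", or "rebuild" while a rebuild is running
    echo "$info" | jq -r '.process.target // "none"'   # the base bdev being rebuilt, e.g. "spare"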
00:38:39.396 [2024-07-12 09:05:14.366939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.396 [2024-07-12 09:05:14.376791] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004c770 00:38:39.396 [2024-07-12 09:05:14.383644] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:39.396 09:05:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.329 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.587 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:40.587 "name": "raid_bdev1", 00:38:40.587 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:40.587 "strip_size_kb": 64, 00:38:40.587 "state": "online", 00:38:40.587 "raid_level": "raid5f", 00:38:40.587 "superblock": true, 00:38:40.587 "num_base_bdevs": 4, 00:38:40.587 "num_base_bdevs_discovered": 4, 00:38:40.587 "num_base_bdevs_operational": 4, 00:38:40.587 "process": { 00:38:40.587 "type": "rebuild", 00:38:40.587 "target": "spare", 00:38:40.587 "progress": { 00:38:40.587 "blocks": 23040, 00:38:40.587 "percent": 12 00:38:40.587 } 00:38:40.587 }, 00:38:40.587 "base_bdevs_list": [ 00:38:40.587 { 00:38:40.587 "name": "spare", 00:38:40.587 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:40.587 "is_configured": true, 00:38:40.587 "data_offset": 2048, 00:38:40.587 "data_size": 63488 00:38:40.587 }, 00:38:40.587 { 00:38:40.587 "name": "BaseBdev2", 00:38:40.587 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:40.587 "is_configured": true, 00:38:40.587 "data_offset": 2048, 00:38:40.587 "data_size": 63488 00:38:40.587 }, 00:38:40.587 { 00:38:40.587 "name": "BaseBdev3", 00:38:40.588 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:40.588 "is_configured": true, 00:38:40.588 "data_offset": 2048, 00:38:40.588 "data_size": 63488 00:38:40.588 }, 00:38:40.588 { 00:38:40.588 "name": "BaseBdev4", 00:38:40.588 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:40.588 "is_configured": true, 00:38:40.588 "data_offset": 2048, 00:38:40.588 "data_size": 63488 00:38:40.588 } 00:38:40.588 ] 00:38:40.588 }' 00:38:40.588 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:40.588 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:40.588 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:40.588 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:40.588 09:05:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:40.847 [2024-07-12 09:05:15.929191] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.847 [2024-07-12 09:05:15.995371] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:40.847 [2024-07-12 09:05:15.995491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:40.847 [2024-07-12 09:05:15.995526] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.847 [2024-07-12 09:05:15.995536] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.847 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.106 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:41.106 "name": "raid_bdev1", 00:38:41.106 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:41.106 "strip_size_kb": 64, 00:38:41.106 "state": "online", 00:38:41.106 "raid_level": "raid5f", 00:38:41.106 "superblock": true, 00:38:41.106 "num_base_bdevs": 4, 00:38:41.106 "num_base_bdevs_discovered": 3, 00:38:41.106 "num_base_bdevs_operational": 3, 00:38:41.106 "base_bdevs_list": [ 00:38:41.106 { 00:38:41.106 "name": null, 00:38:41.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:41.106 "is_configured": false, 00:38:41.106 "data_offset": 2048, 00:38:41.106 "data_size": 63488 00:38:41.106 }, 00:38:41.106 { 00:38:41.106 "name": "BaseBdev2", 00:38:41.106 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:41.106 "is_configured": true, 00:38:41.106 "data_offset": 2048, 00:38:41.106 "data_size": 63488 00:38:41.106 }, 00:38:41.106 { 00:38:41.106 "name": "BaseBdev3", 00:38:41.106 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:41.106 "is_configured": true, 00:38:41.106 "data_offset": 2048, 00:38:41.106 "data_size": 63488 00:38:41.106 }, 00:38:41.106 { 00:38:41.106 "name": "BaseBdev4", 00:38:41.106 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:41.106 "is_configured": true, 00:38:41.106 "data_offset": 2048, 00:38:41.106 "data_size": 63488 
00:38:41.106 } 00:38:41.106 ] 00:38:41.106 }' 00:38:41.106 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:41.106 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:42.041 09:05:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:42.041 [2024-07-12 09:05:17.149519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:42.041 [2024-07-12 09:05:17.149619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.041 [2024-07-12 09:05:17.149665] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:38:42.041 [2024-07-12 09:05:17.149689] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.041 [2024-07-12 09:05:17.150338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.041 [2024-07-12 09:05:17.150417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:42.041 [2024-07-12 09:05:17.150560] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:42.041 [2024-07-12 09:05:17.150578] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:42.041 [2024-07-12 09:05:17.150588] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:42.041 [2024-07-12 09:05:17.150641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:42.041 [2024-07-12 09:05:17.161289] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004cab0 00:38:42.041 spare 00:38:42.041 [2024-07-12 09:05:17.167831] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:42.041 09:05:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:38:43.417 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:43.417 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:43.417 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:43.417 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:43.417 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:43.418 "name": "raid_bdev1", 00:38:43.418 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:43.418 "strip_size_kb": 64, 00:38:43.418 "state": "online", 00:38:43.418 "raid_level": "raid5f", 00:38:43.418 "superblock": true, 00:38:43.418 "num_base_bdevs": 4, 00:38:43.418 "num_base_bdevs_discovered": 4, 00:38:43.418 "num_base_bdevs_operational": 4, 00:38:43.418 "process": { 00:38:43.418 "type": "rebuild", 00:38:43.418 "target": "spare", 
00:38:43.418 "progress": { 00:38:43.418 "blocks": 21120, 00:38:43.418 "percent": 11 00:38:43.418 } 00:38:43.418 }, 00:38:43.418 "base_bdevs_list": [ 00:38:43.418 { 00:38:43.418 "name": "spare", 00:38:43.418 "uuid": "03739758-bd10-50e4-9b2c-5b26e15b46bf", 00:38:43.418 "is_configured": true, 00:38:43.418 "data_offset": 2048, 00:38:43.418 "data_size": 63488 00:38:43.418 }, 00:38:43.418 { 00:38:43.418 "name": "BaseBdev2", 00:38:43.418 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:43.418 "is_configured": true, 00:38:43.418 "data_offset": 2048, 00:38:43.418 "data_size": 63488 00:38:43.418 }, 00:38:43.418 { 00:38:43.418 "name": "BaseBdev3", 00:38:43.418 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:43.418 "is_configured": true, 00:38:43.418 "data_offset": 2048, 00:38:43.418 "data_size": 63488 00:38:43.418 }, 00:38:43.418 { 00:38:43.418 "name": "BaseBdev4", 00:38:43.418 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:43.418 "is_configured": true, 00:38:43.418 "data_offset": 2048, 00:38:43.418 "data_size": 63488 00:38:43.418 } 00:38:43.418 ] 00:38:43.418 }' 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:43.418 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:43.677 [2024-07-12 09:05:18.729144] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.677 [2024-07-12 09:05:18.779669] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:43.677 [2024-07-12 09:05:18.779764] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:43.677 [2024-07-12 09:05:18.779786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.677 [2024-07-12 09:05:18.779796] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.677 09:05:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.935 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:43.935 "name": "raid_bdev1", 00:38:43.935 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:43.935 "strip_size_kb": 64, 00:38:43.935 "state": "online", 00:38:43.935 "raid_level": "raid5f", 00:38:43.935 "superblock": true, 00:38:43.935 "num_base_bdevs": 4, 00:38:43.935 "num_base_bdevs_discovered": 3, 00:38:43.935 "num_base_bdevs_operational": 3, 00:38:43.935 "base_bdevs_list": [ 00:38:43.935 { 00:38:43.935 "name": null, 00:38:43.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.935 "is_configured": false, 00:38:43.935 "data_offset": 2048, 00:38:43.935 "data_size": 63488 00:38:43.935 }, 00:38:43.935 { 00:38:43.935 "name": "BaseBdev2", 00:38:43.935 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:43.935 "is_configured": true, 00:38:43.935 "data_offset": 2048, 00:38:43.935 "data_size": 63488 00:38:43.935 }, 00:38:43.935 { 00:38:43.935 "name": "BaseBdev3", 00:38:43.935 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:43.935 "is_configured": true, 00:38:43.935 "data_offset": 2048, 00:38:43.935 "data_size": 63488 00:38:43.935 }, 00:38:43.935 { 00:38:43.935 "name": "BaseBdev4", 00:38:43.935 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:43.935 "is_configured": true, 00:38:43.935 "data_offset": 2048, 00:38:43.935 "data_size": 63488 00:38:43.935 } 00:38:43.935 ] 00:38:43.935 }' 00:38:43.935 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:43.935 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:44.869 09:05:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:44.869 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:44.869 "name": "raid_bdev1", 00:38:44.869 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:44.869 "strip_size_kb": 64, 00:38:44.869 "state": "online", 00:38:44.869 "raid_level": "raid5f", 00:38:44.869 "superblock": true, 00:38:44.869 "num_base_bdevs": 4, 00:38:44.869 "num_base_bdevs_discovered": 3, 00:38:44.869 "num_base_bdevs_operational": 3, 00:38:44.869 "base_bdevs_list": [ 00:38:44.869 { 00:38:44.869 "name": null, 00:38:44.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.869 "is_configured": false, 00:38:44.869 "data_offset": 2048, 00:38:44.869 "data_size": 63488 00:38:44.869 }, 00:38:44.869 { 00:38:44.869 "name": "BaseBdev2", 00:38:44.869 "uuid": 
"8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:44.869 "is_configured": true, 00:38:44.869 "data_offset": 2048, 00:38:44.869 "data_size": 63488 00:38:44.869 }, 00:38:44.869 { 00:38:44.869 "name": "BaseBdev3", 00:38:44.869 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:44.869 "is_configured": true, 00:38:44.869 "data_offset": 2048, 00:38:44.869 "data_size": 63488 00:38:44.869 }, 00:38:44.869 { 00:38:44.869 "name": "BaseBdev4", 00:38:44.869 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:44.869 "is_configured": true, 00:38:44.869 "data_offset": 2048, 00:38:44.869 "data_size": 63488 00:38:44.869 } 00:38:44.869 ] 00:38:44.869 }' 00:38:44.869 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:45.127 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:45.127 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:45.127 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:45.127 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:45.386 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:45.644 [2024-07-12 09:05:20.610490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:45.644 [2024-07-12 09:05:20.610608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.644 [2024-07-12 09:05:20.610664] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:38:45.644 [2024-07-12 09:05:20.610688] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.644 [2024-07-12 09:05:20.611333] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:45.644 [2024-07-12 09:05:20.611424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:45.644 [2024-07-12 09:05:20.611550] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:45.644 [2024-07-12 09:05:20.611569] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:45.644 [2024-07-12 09:05:20.611577] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:45.644 BaseBdev1 00:38:45.644 09:05:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:46.579 09:05:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.579 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.839 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:46.839 "name": "raid_bdev1", 00:38:46.839 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:46.839 "strip_size_kb": 64, 00:38:46.839 "state": "online", 00:38:46.839 "raid_level": "raid5f", 00:38:46.839 "superblock": true, 00:38:46.839 "num_base_bdevs": 4, 00:38:46.839 "num_base_bdevs_discovered": 3, 00:38:46.839 "num_base_bdevs_operational": 3, 00:38:46.839 "base_bdevs_list": [ 00:38:46.839 { 00:38:46.839 "name": null, 00:38:46.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.839 "is_configured": false, 00:38:46.839 "data_offset": 2048, 00:38:46.839 "data_size": 63488 00:38:46.839 }, 00:38:46.839 { 00:38:46.839 "name": "BaseBdev2", 00:38:46.839 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:46.839 "is_configured": true, 00:38:46.839 "data_offset": 2048, 00:38:46.839 "data_size": 63488 00:38:46.839 }, 00:38:46.839 { 00:38:46.839 "name": "BaseBdev3", 00:38:46.839 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:46.839 "is_configured": true, 00:38:46.839 "data_offset": 2048, 00:38:46.839 "data_size": 63488 00:38:46.839 }, 00:38:46.839 { 00:38:46.839 "name": "BaseBdev4", 00:38:46.839 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:46.839 "is_configured": true, 00:38:46.839 "data_offset": 2048, 00:38:46.839 "data_size": 63488 00:38:46.839 } 00:38:46.839 ] 00:38:46.839 }' 00:38:46.839 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:46.839 09:05:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:47.412 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.671 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:47.671 "name": "raid_bdev1", 00:38:47.671 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:47.671 "strip_size_kb": 64, 00:38:47.671 "state": "online", 00:38:47.671 "raid_level": "raid5f", 00:38:47.671 "superblock": true, 
00:38:47.671 "num_base_bdevs": 4, 00:38:47.671 "num_base_bdevs_discovered": 3, 00:38:47.671 "num_base_bdevs_operational": 3, 00:38:47.671 "base_bdevs_list": [ 00:38:47.671 { 00:38:47.671 "name": null, 00:38:47.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.671 "is_configured": false, 00:38:47.671 "data_offset": 2048, 00:38:47.671 "data_size": 63488 00:38:47.671 }, 00:38:47.671 { 00:38:47.671 "name": "BaseBdev2", 00:38:47.671 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:47.671 "is_configured": true, 00:38:47.671 "data_offset": 2048, 00:38:47.671 "data_size": 63488 00:38:47.671 }, 00:38:47.671 { 00:38:47.671 "name": "BaseBdev3", 00:38:47.671 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:47.671 "is_configured": true, 00:38:47.671 "data_offset": 2048, 00:38:47.671 "data_size": 63488 00:38:47.671 }, 00:38:47.671 { 00:38:47.671 "name": "BaseBdev4", 00:38:47.671 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:47.671 "is_configured": true, 00:38:47.671 "data_offset": 2048, 00:38:47.671 "data_size": 63488 00:38:47.671 } 00:38:47.671 ] 00:38:47.671 }' 00:38:47.671 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:47.930 09:05:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:48.189 [2024-07-12 09:05:23.131144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:48.189 [2024-07-12 09:05:23.131342] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:48.189 [2024-07-12 09:05:23.131358] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:48.189 request: 00:38:48.189 { 00:38:48.189 "base_bdev": "BaseBdev1", 00:38:48.189 "raid_bdev": "raid_bdev1", 00:38:48.189 "method": "bdev_raid_add_base_bdev", 00:38:48.189 "req_id": 1 00:38:48.189 } 00:38:48.189 Got JSON-RPC error response 00:38:48.189 response: 00:38:48.189 { 00:38:48.189 "code": -22, 00:38:48.189 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:48.189 } 00:38:48.189 09:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:38:48.189 09:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:48.189 09:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:48.189 09:05:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:48.189 09:05:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:49.123 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.382 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:49.382 "name": "raid_bdev1", 00:38:49.382 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:49.382 "strip_size_kb": 64, 00:38:49.382 "state": "online", 00:38:49.382 "raid_level": "raid5f", 00:38:49.382 "superblock": true, 00:38:49.382 "num_base_bdevs": 4, 00:38:49.382 "num_base_bdevs_discovered": 3, 00:38:49.382 "num_base_bdevs_operational": 3, 00:38:49.382 "base_bdevs_list": [ 00:38:49.382 { 00:38:49.382 "name": null, 00:38:49.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.382 "is_configured": false, 00:38:49.382 "data_offset": 2048, 00:38:49.382 "data_size": 63488 00:38:49.382 }, 00:38:49.382 { 00:38:49.382 "name": "BaseBdev2", 00:38:49.382 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:49.382 "is_configured": true, 00:38:49.382 "data_offset": 2048, 00:38:49.382 
"data_size": 63488 00:38:49.382 }, 00:38:49.382 { 00:38:49.382 "name": "BaseBdev3", 00:38:49.382 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:49.382 "is_configured": true, 00:38:49.382 "data_offset": 2048, 00:38:49.382 "data_size": 63488 00:38:49.382 }, 00:38:49.382 { 00:38:49.382 "name": "BaseBdev4", 00:38:49.382 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:49.382 "is_configured": true, 00:38:49.382 "data_offset": 2048, 00:38:49.382 "data_size": 63488 00:38:49.382 } 00:38:49.382 ] 00:38:49.382 }' 00:38:49.382 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:49.382 09:05:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:49.950 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.208 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:50.208 "name": "raid_bdev1", 00:38:50.208 "uuid": "2a4049c4-9df9-455a-9d7a-c5df2d739701", 00:38:50.208 "strip_size_kb": 64, 00:38:50.208 "state": "online", 00:38:50.208 "raid_level": "raid5f", 00:38:50.208 "superblock": true, 00:38:50.208 "num_base_bdevs": 4, 00:38:50.208 "num_base_bdevs_discovered": 3, 00:38:50.208 "num_base_bdevs_operational": 3, 00:38:50.208 "base_bdevs_list": [ 00:38:50.208 { 00:38:50.208 "name": null, 00:38:50.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:50.208 "is_configured": false, 00:38:50.208 "data_offset": 2048, 00:38:50.208 "data_size": 63488 00:38:50.208 }, 00:38:50.208 { 00:38:50.208 "name": "BaseBdev2", 00:38:50.208 "uuid": "8c458e7d-5b43-5466-9d40-0d72fcb4235c", 00:38:50.208 "is_configured": true, 00:38:50.208 "data_offset": 2048, 00:38:50.208 "data_size": 63488 00:38:50.208 }, 00:38:50.208 { 00:38:50.208 "name": "BaseBdev3", 00:38:50.208 "uuid": "39a45745-5d25-524d-ad0c-5111c90a8fd8", 00:38:50.208 "is_configured": true, 00:38:50.208 "data_offset": 2048, 00:38:50.208 "data_size": 63488 00:38:50.208 }, 00:38:50.208 { 00:38:50.208 "name": "BaseBdev4", 00:38:50.208 "uuid": "fe14cf83-bc08-5db0-b64b-094e1588eac3", 00:38:50.208 "is_configured": true, 00:38:50.208 "data_offset": 2048, 00:38:50.208 "data_size": 63488 00:38:50.208 } 00:38:50.208 ] 00:38:50.208 }' 00:38:50.208 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:50.208 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:50.208 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 160843 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 160843 ']' 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 160843 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160843 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160843' 00:38:50.467 killing process with pid 160843 00:38:50.467 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 160843 00:38:50.467 Received shutdown signal, test time was about 60.000000 seconds 00:38:50.467 00:38:50.467 Latency(us) 00:38:50.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.467 =================================================================================================================== 00:38:50.467 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:50.468 09:05:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 160843 00:38:50.468 [2024-07-12 09:05:25.480073] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:50.468 [2024-07-12 09:05:25.480202] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:50.468 [2024-07-12 09:05:25.480346] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:50.468 [2024-07-12 09:05:25.480372] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:38:50.727 [2024-07-12 09:05:25.802033] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:51.664 ************************************ 00:38:51.664 END TEST raid5f_rebuild_test_sb 00:38:51.664 ************************************ 00:38:51.664 09:05:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:38:51.664 00:38:51.664 real 0m40.670s 00:38:51.664 user 1m3.415s 00:38:51.664 sys 0m3.790s 00:38:51.664 09:05:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:51.664 09:05:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:51.664 09:05:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:51.664 09:05:26 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:38:51.664 09:05:26 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:38:51.664 09:05:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:51.664 09:05:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:51.664 09:05:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:51.664 ************************************ 00:38:51.664 START TEST raid_state_function_test_sb_4k 00:38:51.664 ************************************ 00:38:51.664 09:05:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=161925 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 161925' 00:38:51.664 Process raid pid: 161925 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 161925 /var/tmp/spdk-raid.sock 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 161925 ']' 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:51.664 
09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:51.664 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:51.665 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:51.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:51.665 09:05:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:51.665 [2024-07-12 09:05:26.844455] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:38:51.665 [2024-07-12 09:05:26.844659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.923 [2024-07-12 09:05:27.002817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.182 [2024-07-12 09:05:27.219877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.441 [2024-07-12 09:05:27.412520] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:52.702 09:05:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:52.702 09:05:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:38:52.702 09:05:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:52.960 [2024-07-12 09:05:28.053220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:52.960 [2024-07-12 09:05:28.053325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:52.960 [2024-07-12 09:05:28.053360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:52.960 [2024-07-12 09:05:28.053391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:52.961 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:53.219 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:53.219 "name": "Existed_Raid", 00:38:53.219 "uuid": "51b9906a-9749-470f-b09a-83a81795cada", 00:38:53.219 "strip_size_kb": 0, 00:38:53.219 "state": "configuring", 00:38:53.219 "raid_level": "raid1", 00:38:53.219 "superblock": true, 00:38:53.219 "num_base_bdevs": 2, 00:38:53.219 "num_base_bdevs_discovered": 0, 00:38:53.219 "num_base_bdevs_operational": 2, 00:38:53.219 "base_bdevs_list": [ 00:38:53.219 { 00:38:53.219 "name": "BaseBdev1", 00:38:53.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:53.219 "is_configured": false, 00:38:53.219 "data_offset": 0, 00:38:53.219 "data_size": 0 00:38:53.219 }, 00:38:53.219 { 00:38:53.219 "name": "BaseBdev2", 00:38:53.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:53.219 "is_configured": false, 00:38:53.219 "data_offset": 0, 00:38:53.219 "data_size": 0 00:38:53.219 } 00:38:53.219 ] 00:38:53.219 }' 00:38:53.219 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:53.219 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:53.786 09:05:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:54.042 [2024-07-12 09:05:29.205368] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:54.042 [2024-07-12 09:05:29.205429] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:38:54.042 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:54.299 [2024-07-12 09:05:29.461399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:54.299 [2024-07-12 09:05:29.461496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:54.299 [2024-07-12 09:05:29.461525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:54.299 [2024-07-12 09:05:29.461553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:54.299 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:38:54.556 [2024-07-12 09:05:29.682661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:54.556 BaseBdev1 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:54.556 09:05:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:54.556 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:54.815 09:05:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:55.073 [ 00:38:55.073 { 00:38:55.073 "name": "BaseBdev1", 00:38:55.073 "aliases": [ 00:38:55.073 "966e19d3-178d-4008-9455-c2a3c6beba0c" 00:38:55.073 ], 00:38:55.073 "product_name": "Malloc disk", 00:38:55.073 "block_size": 4096, 00:38:55.073 "num_blocks": 8192, 00:38:55.073 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:55.073 "assigned_rate_limits": { 00:38:55.073 "rw_ios_per_sec": 0, 00:38:55.073 "rw_mbytes_per_sec": 0, 00:38:55.073 "r_mbytes_per_sec": 0, 00:38:55.073 "w_mbytes_per_sec": 0 00:38:55.073 }, 00:38:55.073 "claimed": true, 00:38:55.073 "claim_type": "exclusive_write", 00:38:55.073 "zoned": false, 00:38:55.073 "supported_io_types": { 00:38:55.073 "read": true, 00:38:55.073 "write": true, 00:38:55.073 "unmap": true, 00:38:55.073 "flush": true, 00:38:55.073 "reset": true, 00:38:55.073 "nvme_admin": false, 00:38:55.073 "nvme_io": false, 00:38:55.073 "nvme_io_md": false, 00:38:55.073 "write_zeroes": true, 00:38:55.073 "zcopy": true, 00:38:55.073 "get_zone_info": false, 00:38:55.073 "zone_management": false, 00:38:55.073 "zone_append": false, 00:38:55.073 "compare": false, 00:38:55.073 "compare_and_write": false, 00:38:55.073 "abort": true, 00:38:55.073 "seek_hole": false, 00:38:55.073 "seek_data": false, 00:38:55.073 "copy": true, 00:38:55.073 "nvme_iov_md": false 00:38:55.073 }, 00:38:55.073 "memory_domains": [ 00:38:55.073 { 00:38:55.073 "dma_device_id": "system", 00:38:55.073 "dma_device_type": 1 00:38:55.073 }, 00:38:55.073 { 00:38:55.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:55.073 "dma_device_type": 2 00:38:55.073 } 00:38:55.073 ], 00:38:55.073 "driver_specific": {} 00:38:55.073 } 00:38:55.073 ] 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.073 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:55.332 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:55.332 "name": "Existed_Raid", 00:38:55.332 "uuid": "d5c8a026-0ae2-4b59-b4f4-7527ba22c379", 00:38:55.332 "strip_size_kb": 0, 00:38:55.332 "state": "configuring", 00:38:55.332 "raid_level": "raid1", 00:38:55.332 "superblock": true, 00:38:55.332 "num_base_bdevs": 2, 00:38:55.332 "num_base_bdevs_discovered": 1, 00:38:55.332 "num_base_bdevs_operational": 2, 00:38:55.332 "base_bdevs_list": [ 00:38:55.332 { 00:38:55.332 "name": "BaseBdev1", 00:38:55.332 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:55.332 "is_configured": true, 00:38:55.332 "data_offset": 256, 00:38:55.332 "data_size": 7936 00:38:55.332 }, 00:38:55.332 { 00:38:55.332 "name": "BaseBdev2", 00:38:55.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.332 "is_configured": false, 00:38:55.332 "data_offset": 0, 00:38:55.332 "data_size": 0 00:38:55.332 } 00:38:55.332 ] 00:38:55.332 }' 00:38:55.332 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:55.332 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:55.898 09:05:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:56.154 [2024-07-12 09:05:31.226948] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:56.155 [2024-07-12 09:05:31.227004] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:38:56.155 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:56.412 [2024-07-12 09:05:31.419013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:56.412 [2024-07-12 09:05:31.420668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:56.412 [2024-07-12 09:05:31.420727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:56.412 09:05:31 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:56.412 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:56.670 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:56.670 "name": "Existed_Raid", 00:38:56.670 "uuid": "849f4287-ff05-4d59-93fa-0ea3953e4d7a", 00:38:56.670 "strip_size_kb": 0, 00:38:56.670 "state": "configuring", 00:38:56.670 "raid_level": "raid1", 00:38:56.670 "superblock": true, 00:38:56.670 "num_base_bdevs": 2, 00:38:56.670 "num_base_bdevs_discovered": 1, 00:38:56.670 "num_base_bdevs_operational": 2, 00:38:56.670 "base_bdevs_list": [ 00:38:56.670 { 00:38:56.670 "name": "BaseBdev1", 00:38:56.670 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:56.670 "is_configured": true, 00:38:56.670 "data_offset": 256, 00:38:56.670 "data_size": 7936 00:38:56.670 }, 00:38:56.670 { 00:38:56.670 "name": "BaseBdev2", 00:38:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.670 "is_configured": false, 00:38:56.670 "data_offset": 0, 00:38:56.670 "data_size": 0 00:38:56.670 } 00:38:56.670 ] 00:38:56.670 }' 00:38:56.670 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:56.670 09:05:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:57.236 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:38:57.494 [2024-07-12 09:05:32.617322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:57.494 [2024-07-12 09:05:32.617768] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:38:57.494 [2024-07-12 09:05:32.617803] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:57.494 [2024-07-12 09:05:32.617942] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:38:57.494 BaseBdev2 00:38:57.494 [2024-07-12 09:05:32.618346] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:38:57.494 [2024-07-12 09:05:32.618375] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:38:57.494 [2024-07-12 09:05:32.618598] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev2 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:57.494 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:57.760 09:05:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:58.038 [ 00:38:58.038 { 00:38:58.038 "name": "BaseBdev2", 00:38:58.038 "aliases": [ 00:38:58.038 "67ca2a6e-8397-40d6-bfd1-38416d727456" 00:38:58.038 ], 00:38:58.038 "product_name": "Malloc disk", 00:38:58.038 "block_size": 4096, 00:38:58.038 "num_blocks": 8192, 00:38:58.038 "uuid": "67ca2a6e-8397-40d6-bfd1-38416d727456", 00:38:58.038 "assigned_rate_limits": { 00:38:58.038 "rw_ios_per_sec": 0, 00:38:58.038 "rw_mbytes_per_sec": 0, 00:38:58.038 "r_mbytes_per_sec": 0, 00:38:58.038 "w_mbytes_per_sec": 0 00:38:58.038 }, 00:38:58.038 "claimed": true, 00:38:58.038 "claim_type": "exclusive_write", 00:38:58.038 "zoned": false, 00:38:58.038 "supported_io_types": { 00:38:58.038 "read": true, 00:38:58.038 "write": true, 00:38:58.038 "unmap": true, 00:38:58.038 "flush": true, 00:38:58.038 "reset": true, 00:38:58.038 "nvme_admin": false, 00:38:58.038 "nvme_io": false, 00:38:58.038 "nvme_io_md": false, 00:38:58.038 "write_zeroes": true, 00:38:58.038 "zcopy": true, 00:38:58.038 "get_zone_info": false, 00:38:58.038 "zone_management": false, 00:38:58.038 "zone_append": false, 00:38:58.038 "compare": false, 00:38:58.038 "compare_and_write": false, 00:38:58.038 "abort": true, 00:38:58.038 "seek_hole": false, 00:38:58.038 "seek_data": false, 00:38:58.038 "copy": true, 00:38:58.038 "nvme_iov_md": false 00:38:58.038 }, 00:38:58.038 "memory_domains": [ 00:38:58.038 { 00:38:58.038 "dma_device_id": "system", 00:38:58.038 "dma_device_type": 1 00:38:58.038 }, 00:38:58.038 { 00:38:58.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:58.038 "dma_device_type": 2 00:38:58.038 } 00:38:58.038 ], 00:38:58.038 "driver_specific": {} 00:38:58.038 } 00:38:58.038 ] 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:58.038 09:05:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:58.038 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:58.295 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:58.295 "name": "Existed_Raid", 00:38:58.295 "uuid": "849f4287-ff05-4d59-93fa-0ea3953e4d7a", 00:38:58.295 "strip_size_kb": 0, 00:38:58.295 "state": "online", 00:38:58.295 "raid_level": "raid1", 00:38:58.295 "superblock": true, 00:38:58.295 "num_base_bdevs": 2, 00:38:58.295 "num_base_bdevs_discovered": 2, 00:38:58.295 "num_base_bdevs_operational": 2, 00:38:58.295 "base_bdevs_list": [ 00:38:58.295 { 00:38:58.295 "name": "BaseBdev1", 00:38:58.295 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:58.295 "is_configured": true, 00:38:58.295 "data_offset": 256, 00:38:58.295 "data_size": 7936 00:38:58.295 }, 00:38:58.295 { 00:38:58.295 "name": "BaseBdev2", 00:38:58.295 "uuid": "67ca2a6e-8397-40d6-bfd1-38416d727456", 00:38:58.295 "is_configured": true, 00:38:58.295 "data_offset": 256, 00:38:58.295 "data_size": 7936 00:38:58.295 } 00:38:58.295 ] 00:38:58.295 }' 00:38:58.295 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:58.295 09:05:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:38:58.859 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:59.115 [2024-07-12 09:05:34.226454] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:59.115 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:59.115 "name": "Existed_Raid", 00:38:59.115 "aliases": [ 00:38:59.115 "849f4287-ff05-4d59-93fa-0ea3953e4d7a" 00:38:59.115 ], 00:38:59.115 "product_name": "Raid Volume", 00:38:59.115 "block_size": 4096, 00:38:59.115 
"num_blocks": 7936, 00:38:59.115 "uuid": "849f4287-ff05-4d59-93fa-0ea3953e4d7a", 00:38:59.115 "assigned_rate_limits": { 00:38:59.115 "rw_ios_per_sec": 0, 00:38:59.115 "rw_mbytes_per_sec": 0, 00:38:59.115 "r_mbytes_per_sec": 0, 00:38:59.115 "w_mbytes_per_sec": 0 00:38:59.115 }, 00:38:59.115 "claimed": false, 00:38:59.115 "zoned": false, 00:38:59.115 "supported_io_types": { 00:38:59.115 "read": true, 00:38:59.116 "write": true, 00:38:59.116 "unmap": false, 00:38:59.116 "flush": false, 00:38:59.116 "reset": true, 00:38:59.116 "nvme_admin": false, 00:38:59.116 "nvme_io": false, 00:38:59.116 "nvme_io_md": false, 00:38:59.116 "write_zeroes": true, 00:38:59.116 "zcopy": false, 00:38:59.116 "get_zone_info": false, 00:38:59.116 "zone_management": false, 00:38:59.116 "zone_append": false, 00:38:59.116 "compare": false, 00:38:59.116 "compare_and_write": false, 00:38:59.116 "abort": false, 00:38:59.116 "seek_hole": false, 00:38:59.116 "seek_data": false, 00:38:59.116 "copy": false, 00:38:59.116 "nvme_iov_md": false 00:38:59.116 }, 00:38:59.116 "memory_domains": [ 00:38:59.116 { 00:38:59.116 "dma_device_id": "system", 00:38:59.116 "dma_device_type": 1 00:38:59.116 }, 00:38:59.116 { 00:38:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.116 "dma_device_type": 2 00:38:59.116 }, 00:38:59.116 { 00:38:59.116 "dma_device_id": "system", 00:38:59.116 "dma_device_type": 1 00:38:59.116 }, 00:38:59.116 { 00:38:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.116 "dma_device_type": 2 00:38:59.116 } 00:38:59.116 ], 00:38:59.116 "driver_specific": { 00:38:59.116 "raid": { 00:38:59.116 "uuid": "849f4287-ff05-4d59-93fa-0ea3953e4d7a", 00:38:59.116 "strip_size_kb": 0, 00:38:59.116 "state": "online", 00:38:59.116 "raid_level": "raid1", 00:38:59.116 "superblock": true, 00:38:59.116 "num_base_bdevs": 2, 00:38:59.116 "num_base_bdevs_discovered": 2, 00:38:59.116 "num_base_bdevs_operational": 2, 00:38:59.116 "base_bdevs_list": [ 00:38:59.116 { 00:38:59.116 "name": "BaseBdev1", 00:38:59.116 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:59.116 "is_configured": true, 00:38:59.116 "data_offset": 256, 00:38:59.116 "data_size": 7936 00:38:59.116 }, 00:38:59.116 { 00:38:59.116 "name": "BaseBdev2", 00:38:59.116 "uuid": "67ca2a6e-8397-40d6-bfd1-38416d727456", 00:38:59.116 "is_configured": true, 00:38:59.116 "data_offset": 256, 00:38:59.116 "data_size": 7936 00:38:59.116 } 00:38:59.116 ] 00:38:59.116 } 00:38:59.116 } 00:38:59.116 }' 00:38:59.116 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:59.116 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:38:59.116 BaseBdev2' 00:38:59.116 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:59.116 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:38:59.116 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:59.373 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:59.373 "name": "BaseBdev1", 00:38:59.373 "aliases": [ 00:38:59.373 "966e19d3-178d-4008-9455-c2a3c6beba0c" 00:38:59.373 ], 00:38:59.373 "product_name": "Malloc disk", 00:38:59.373 "block_size": 4096, 00:38:59.373 "num_blocks": 8192, 
00:38:59.373 "uuid": "966e19d3-178d-4008-9455-c2a3c6beba0c", 00:38:59.373 "assigned_rate_limits": { 00:38:59.373 "rw_ios_per_sec": 0, 00:38:59.373 "rw_mbytes_per_sec": 0, 00:38:59.373 "r_mbytes_per_sec": 0, 00:38:59.373 "w_mbytes_per_sec": 0 00:38:59.373 }, 00:38:59.373 "claimed": true, 00:38:59.373 "claim_type": "exclusive_write", 00:38:59.373 "zoned": false, 00:38:59.373 "supported_io_types": { 00:38:59.373 "read": true, 00:38:59.373 "write": true, 00:38:59.373 "unmap": true, 00:38:59.373 "flush": true, 00:38:59.373 "reset": true, 00:38:59.373 "nvme_admin": false, 00:38:59.373 "nvme_io": false, 00:38:59.373 "nvme_io_md": false, 00:38:59.373 "write_zeroes": true, 00:38:59.373 "zcopy": true, 00:38:59.373 "get_zone_info": false, 00:38:59.373 "zone_management": false, 00:38:59.373 "zone_append": false, 00:38:59.373 "compare": false, 00:38:59.373 "compare_and_write": false, 00:38:59.373 "abort": true, 00:38:59.373 "seek_hole": false, 00:38:59.373 "seek_data": false, 00:38:59.373 "copy": true, 00:38:59.373 "nvme_iov_md": false 00:38:59.373 }, 00:38:59.373 "memory_domains": [ 00:38:59.373 { 00:38:59.373 "dma_device_id": "system", 00:38:59.373 "dma_device_type": 1 00:38:59.373 }, 00:38:59.373 { 00:38:59.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:59.373 "dma_device_type": 2 00:38:59.373 } 00:38:59.373 ], 00:38:59.373 "driver_specific": {} 00:38:59.373 }' 00:38:59.373 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:59.373 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:59.631 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:38:59.888 09:05:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:00.146 "name": "BaseBdev2", 00:39:00.146 "aliases": [ 00:39:00.146 "67ca2a6e-8397-40d6-bfd1-38416d727456" 00:39:00.146 ], 00:39:00.146 "product_name": "Malloc disk", 00:39:00.146 "block_size": 4096, 00:39:00.146 "num_blocks": 8192, 00:39:00.146 "uuid": "67ca2a6e-8397-40d6-bfd1-38416d727456", 00:39:00.146 "assigned_rate_limits": { 
00:39:00.146 "rw_ios_per_sec": 0, 00:39:00.146 "rw_mbytes_per_sec": 0, 00:39:00.146 "r_mbytes_per_sec": 0, 00:39:00.146 "w_mbytes_per_sec": 0 00:39:00.146 }, 00:39:00.146 "claimed": true, 00:39:00.146 "claim_type": "exclusive_write", 00:39:00.146 "zoned": false, 00:39:00.146 "supported_io_types": { 00:39:00.146 "read": true, 00:39:00.146 "write": true, 00:39:00.146 "unmap": true, 00:39:00.146 "flush": true, 00:39:00.146 "reset": true, 00:39:00.146 "nvme_admin": false, 00:39:00.146 "nvme_io": false, 00:39:00.146 "nvme_io_md": false, 00:39:00.146 "write_zeroes": true, 00:39:00.146 "zcopy": true, 00:39:00.146 "get_zone_info": false, 00:39:00.146 "zone_management": false, 00:39:00.146 "zone_append": false, 00:39:00.146 "compare": false, 00:39:00.146 "compare_and_write": false, 00:39:00.146 "abort": true, 00:39:00.146 "seek_hole": false, 00:39:00.146 "seek_data": false, 00:39:00.146 "copy": true, 00:39:00.146 "nvme_iov_md": false 00:39:00.146 }, 00:39:00.146 "memory_domains": [ 00:39:00.146 { 00:39:00.146 "dma_device_id": "system", 00:39:00.146 "dma_device_type": 1 00:39:00.146 }, 00:39:00.146 { 00:39:00.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:00.146 "dma_device_type": 2 00:39:00.146 } 00:39:00.146 ], 00:39:00.146 "driver_specific": {} 00:39:00.146 }' 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.146 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:00.403 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:39:00.659 [2024-07-12 09:05:35.822594] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 1 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:00.917 09:05:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:01.174 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:01.174 "name": "Existed_Raid", 00:39:01.174 "uuid": "849f4287-ff05-4d59-93fa-0ea3953e4d7a", 00:39:01.174 "strip_size_kb": 0, 00:39:01.174 "state": "online", 00:39:01.174 "raid_level": "raid1", 00:39:01.174 "superblock": true, 00:39:01.174 "num_base_bdevs": 2, 00:39:01.174 "num_base_bdevs_discovered": 1, 00:39:01.174 "num_base_bdevs_operational": 1, 00:39:01.174 "base_bdevs_list": [ 00:39:01.174 { 00:39:01.174 "name": null, 00:39:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.174 "is_configured": false, 00:39:01.174 "data_offset": 256, 00:39:01.174 "data_size": 7936 00:39:01.174 }, 00:39:01.174 { 00:39:01.174 "name": "BaseBdev2", 00:39:01.174 "uuid": "67ca2a6e-8397-40d6-bfd1-38416d727456", 00:39:01.174 "is_configured": true, 00:39:01.174 "data_offset": 256, 00:39:01.174 "data_size": 7936 00:39:01.174 } 00:39:01.174 ] 00:39:01.174 }' 00:39:01.174 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:01.174 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:01.739 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:39:01.739 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:01.739 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.739 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:39:01.996 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:39:01.996 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:01.996 09:05:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:39:02.253 [2024-07-12 09:05:37.237847] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:02.253 [2024-07-12 09:05:37.237968] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:02.253 [2024-07-12 09:05:37.301560] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:02.253 [2024-07-12 09:05:37.301628] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:02.253 [2024-07-12 09:05:37.301641] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:39:02.253 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:39:02.253 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:39:02.253 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.253 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 161925 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 161925 ']' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 161925 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161925 00:39:02.511 killing process with pid 161925 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161925' 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 161925 00:39:02.511 09:05:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 161925 00:39:02.511 [2024-07-12 09:05:37.593226] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:02.511 [2024-07-12 09:05:37.593354] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:03.446 ************************************ 00:39:03.446 END TEST raid_state_function_test_sb_4k 00:39:03.446 ************************************ 00:39:03.446 09:05:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:39:03.446 00:39:03.446 real 0m11.730s 00:39:03.446 user 0m21.096s 00:39:03.446 sys 0m1.309s 00:39:03.446 09:05:38 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:39:03.446 09:05:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:03.446 09:05:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:03.446 09:05:38 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:39:03.446 09:05:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:39:03.446 09:05:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:03.446 09:05:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:03.446 ************************************ 00:39:03.446 START TEST raid_superblock_test_4k 00:39:03.446 ************************************ 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=162305 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 162305 /var/tmp/spdk-raid.sock 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 162305 ']' 00:39:03.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
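The trace above shows the raid_superblock_test_4k harness launching bdev_svc with a dedicated RPC socket and then blocking in waitforlisten until that socket answers. A minimal sketch of that start-and-poll pattern, assuming a generic rpc_get_methods probe as the readiness check rather than the exact autotest_common.sh implementation:

  rpc_sock=/var/tmp/spdk-raid.sock
  max_retries=100
  # Launch the bare bdev service with raid debug logging, as in the trace above
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -L bdev_raid &
  raid_pid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
  for ((i = 0; i < max_retries; i++)); do
      # Any successful RPC means the socket is accepting connections; rpc_get_methods is a cheap probe
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done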
00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:03.446 09:05:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:03.446 [2024-07-12 09:05:38.635892] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:39:03.446 [2024-07-12 09:05:38.636140] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162305 ] 00:39:03.704 [2024-07-12 09:05:38.803738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.962 [2024-07-12 09:05:38.999385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.220 [2024-07-12 09:05:39.171093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:04.478 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:39:04.737 malloc1 00:39:04.737 09:05:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:04.996 [2024-07-12 09:05:40.016869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:04.996 [2024-07-12 09:05:40.016993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:04.996 [2024-07-12 09:05:40.017031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:39:04.996 [2024-07-12 09:05:40.017052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:04.996 [2024-07-12 09:05:40.019121] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:04.996 [2024-07-12 09:05:40.019172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:04.996 pt1 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:04.996 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:39:05.254 malloc2 00:39:05.254 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:05.254 [2024-07-12 09:05:40.433154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:05.254 [2024-07-12 09:05:40.433279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:05.254 [2024-07-12 09:05:40.433321] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:39:05.254 [2024-07-12 09:05:40.433344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:05.254 [2024-07-12 09:05:40.435690] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:05.254 [2024-07-12 09:05:40.435759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:05.254 pt2 00:39:05.254 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:39:05.255 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:39:05.255 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:39:05.513 [2024-07-12 09:05:40.637291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:05.513 [2024-07-12 09:05:40.639380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:05.513 [2024-07-12 09:05:40.639691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:39:05.513 [2024-07-12 09:05:40.639715] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:05.513 [2024-07-12 09:05:40.639884] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:39:05.513 [2024-07-12 09:05:40.640396] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:39:05.513 [2024-07-12 09:05:40.640423] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:39:05.513 [2024-07-12 09:05:40.640613] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:05.513 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:05.772 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:05.772 "name": "raid_bdev1", 00:39:05.772 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:05.772 "strip_size_kb": 0, 00:39:05.772 "state": "online", 00:39:05.772 "raid_level": "raid1", 00:39:05.772 "superblock": true, 00:39:05.772 "num_base_bdevs": 2, 00:39:05.772 "num_base_bdevs_discovered": 2, 00:39:05.772 "num_base_bdevs_operational": 2, 00:39:05.772 "base_bdevs_list": [ 00:39:05.772 { 00:39:05.772 "name": "pt1", 00:39:05.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:05.772 "is_configured": true, 00:39:05.772 "data_offset": 256, 00:39:05.772 "data_size": 7936 00:39:05.772 }, 00:39:05.772 { 00:39:05.772 "name": "pt2", 00:39:05.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:05.772 "is_configured": true, 00:39:05.772 "data_offset": 256, 00:39:05.772 "data_size": 7936 00:39:05.772 } 00:39:05.772 ] 00:39:05.772 }' 00:39:05.772 09:05:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:05.772 09:05:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:39:06.707 09:05:41 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:06.707 [2024-07-12 09:05:41.729677] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:06.707 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:06.707 "name": "raid_bdev1", 00:39:06.707 "aliases": [ 00:39:06.707 "f7010708-6862-4b33-862e-cb0d8207411f" 00:39:06.707 ], 00:39:06.707 "product_name": "Raid Volume", 00:39:06.707 "block_size": 4096, 00:39:06.707 "num_blocks": 7936, 00:39:06.707 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:06.707 "assigned_rate_limits": { 00:39:06.707 "rw_ios_per_sec": 0, 00:39:06.707 "rw_mbytes_per_sec": 0, 00:39:06.707 "r_mbytes_per_sec": 0, 00:39:06.707 "w_mbytes_per_sec": 0 00:39:06.707 }, 00:39:06.707 "claimed": false, 00:39:06.707 "zoned": false, 00:39:06.707 "supported_io_types": { 00:39:06.707 "read": true, 00:39:06.707 "write": true, 00:39:06.707 "unmap": false, 00:39:06.707 "flush": false, 00:39:06.707 "reset": true, 00:39:06.707 "nvme_admin": false, 00:39:06.707 "nvme_io": false, 00:39:06.707 "nvme_io_md": false, 00:39:06.707 "write_zeroes": true, 00:39:06.707 "zcopy": false, 00:39:06.707 "get_zone_info": false, 00:39:06.707 "zone_management": false, 00:39:06.707 "zone_append": false, 00:39:06.708 "compare": false, 00:39:06.708 "compare_and_write": false, 00:39:06.708 "abort": false, 00:39:06.708 "seek_hole": false, 00:39:06.708 "seek_data": false, 00:39:06.708 "copy": false, 00:39:06.708 "nvme_iov_md": false 00:39:06.708 }, 00:39:06.708 "memory_domains": [ 00:39:06.708 { 00:39:06.708 "dma_device_id": "system", 00:39:06.708 "dma_device_type": 1 00:39:06.708 }, 00:39:06.708 { 00:39:06.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:06.708 "dma_device_type": 2 00:39:06.708 }, 00:39:06.708 { 00:39:06.708 "dma_device_id": "system", 00:39:06.708 "dma_device_type": 1 00:39:06.708 }, 00:39:06.708 { 00:39:06.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:06.708 "dma_device_type": 2 00:39:06.708 } 00:39:06.708 ], 00:39:06.708 "driver_specific": { 00:39:06.708 "raid": { 00:39:06.708 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:06.708 "strip_size_kb": 0, 00:39:06.708 "state": "online", 00:39:06.708 "raid_level": "raid1", 00:39:06.708 "superblock": true, 00:39:06.708 "num_base_bdevs": 2, 00:39:06.708 "num_base_bdevs_discovered": 2, 00:39:06.708 "num_base_bdevs_operational": 2, 00:39:06.708 "base_bdevs_list": [ 00:39:06.708 { 00:39:06.708 "name": "pt1", 00:39:06.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:06.708 "is_configured": true, 00:39:06.708 "data_offset": 256, 00:39:06.708 "data_size": 7936 00:39:06.708 }, 00:39:06.708 { 00:39:06.708 "name": "pt2", 00:39:06.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:06.708 "is_configured": true, 00:39:06.708 "data_offset": 256, 00:39:06.708 "data_size": 7936 00:39:06.708 } 00:39:06.708 ] 00:39:06.708 } 00:39:06.708 } 00:39:06.708 }' 00:39:06.708 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:06.708 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:06.708 pt2' 00:39:06.708 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:39:06.708 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:06.708 09:05:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:06.966 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:06.966 "name": "pt1", 00:39:06.966 "aliases": [ 00:39:06.966 "00000000-0000-0000-0000-000000000001" 00:39:06.966 ], 00:39:06.966 "product_name": "passthru", 00:39:06.966 "block_size": 4096, 00:39:06.966 "num_blocks": 8192, 00:39:06.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:06.966 "assigned_rate_limits": { 00:39:06.966 "rw_ios_per_sec": 0, 00:39:06.966 "rw_mbytes_per_sec": 0, 00:39:06.966 "r_mbytes_per_sec": 0, 00:39:06.966 "w_mbytes_per_sec": 0 00:39:06.966 }, 00:39:06.966 "claimed": true, 00:39:06.966 "claim_type": "exclusive_write", 00:39:06.966 "zoned": false, 00:39:06.966 "supported_io_types": { 00:39:06.966 "read": true, 00:39:06.966 "write": true, 00:39:06.966 "unmap": true, 00:39:06.967 "flush": true, 00:39:06.967 "reset": true, 00:39:06.967 "nvme_admin": false, 00:39:06.967 "nvme_io": false, 00:39:06.967 "nvme_io_md": false, 00:39:06.967 "write_zeroes": true, 00:39:06.967 "zcopy": true, 00:39:06.967 "get_zone_info": false, 00:39:06.967 "zone_management": false, 00:39:06.967 "zone_append": false, 00:39:06.967 "compare": false, 00:39:06.967 "compare_and_write": false, 00:39:06.967 "abort": true, 00:39:06.967 "seek_hole": false, 00:39:06.967 "seek_data": false, 00:39:06.967 "copy": true, 00:39:06.967 "nvme_iov_md": false 00:39:06.967 }, 00:39:06.967 "memory_domains": [ 00:39:06.967 { 00:39:06.967 "dma_device_id": "system", 00:39:06.967 "dma_device_type": 1 00:39:06.967 }, 00:39:06.967 { 00:39:06.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:06.967 "dma_device_type": 2 00:39:06.967 } 00:39:06.967 ], 00:39:06.967 "driver_specific": { 00:39:06.967 "passthru": { 00:39:06.967 "name": "pt1", 00:39:06.967 "base_bdev_name": "malloc1" 00:39:06.967 } 00:39:06.967 } 00:39:06.967 }' 00:39:06.967 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:06.967 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:07.237 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:07.509 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:07.509 "name": "pt2", 00:39:07.509 "aliases": [ 00:39:07.509 "00000000-0000-0000-0000-000000000002" 00:39:07.509 ], 00:39:07.509 "product_name": "passthru", 00:39:07.509 "block_size": 4096, 00:39:07.509 "num_blocks": 8192, 00:39:07.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:07.509 "assigned_rate_limits": { 00:39:07.509 "rw_ios_per_sec": 0, 00:39:07.509 "rw_mbytes_per_sec": 0, 00:39:07.509 "r_mbytes_per_sec": 0, 00:39:07.509 "w_mbytes_per_sec": 0 00:39:07.510 }, 00:39:07.510 "claimed": true, 00:39:07.510 "claim_type": "exclusive_write", 00:39:07.510 "zoned": false, 00:39:07.510 "supported_io_types": { 00:39:07.510 "read": true, 00:39:07.510 "write": true, 00:39:07.510 "unmap": true, 00:39:07.510 "flush": true, 00:39:07.510 "reset": true, 00:39:07.510 "nvme_admin": false, 00:39:07.510 "nvme_io": false, 00:39:07.510 "nvme_io_md": false, 00:39:07.510 "write_zeroes": true, 00:39:07.510 "zcopy": true, 00:39:07.510 "get_zone_info": false, 00:39:07.510 "zone_management": false, 00:39:07.510 "zone_append": false, 00:39:07.510 "compare": false, 00:39:07.510 "compare_and_write": false, 00:39:07.510 "abort": true, 00:39:07.510 "seek_hole": false, 00:39:07.510 "seek_data": false, 00:39:07.510 "copy": true, 00:39:07.510 "nvme_iov_md": false 00:39:07.510 }, 00:39:07.510 "memory_domains": [ 00:39:07.510 { 00:39:07.510 "dma_device_id": "system", 00:39:07.510 "dma_device_type": 1 00:39:07.510 }, 00:39:07.510 { 00:39:07.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:07.510 "dma_device_type": 2 00:39:07.510 } 00:39:07.510 ], 00:39:07.510 "driver_specific": { 00:39:07.510 "passthru": { 00:39:07.510 "name": "pt2", 00:39:07.510 "base_bdev_name": "malloc2" 00:39:07.510 } 00:39:07.510 } 00:39:07.510 }' 00:39:07.510 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:07.768 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:08.029 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:08.029 09:05:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:08.029 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:08.029 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:08.029 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:08.029 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:39:08.286 
[2024-07-12 09:05:43.350059] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:08.286 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f7010708-6862-4b33-862e-cb0d8207411f 00:39:08.286 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z f7010708-6862-4b33-862e-cb0d8207411f ']' 00:39:08.286 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:08.544 [2024-07-12 09:05:43.549782] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:08.544 [2024-07-12 09:05:43.549819] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:08.544 [2024-07-12 09:05:43.549923] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:08.544 [2024-07-12 09:05:43.549991] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:08.544 [2024-07-12 09:05:43.550035] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:39:08.544 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:08.544 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:39:08.802 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:39:08.802 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:39:08.802 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:08.802 09:05:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:09.060 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:39:09.061 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:09.321 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:39:09.321 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:09.582 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:39:09.841 [2024-07-12 09:05:44.838058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:09.841 [2024-07-12 09:05:44.839846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:09.841 [2024-07-12 09:05:44.839950] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:39:09.841 [2024-07-12 09:05:44.840073] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:09.841 [2024-07-12 09:05:44.840161] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:09.841 [2024-07-12 09:05:44.840174] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:39:09.841 request: 00:39:09.841 { 00:39:09.841 "name": "raid_bdev1", 00:39:09.841 "raid_level": "raid1", 00:39:09.841 "base_bdevs": [ 00:39:09.841 "malloc1", 00:39:09.841 "malloc2" 00:39:09.841 ], 00:39:09.841 "superblock": false, 00:39:09.841 "method": "bdev_raid_create", 00:39:09.841 "req_id": 1 00:39:09.841 } 00:39:09.841 Got JSON-RPC error response 00:39:09.841 response: 00:39:09.841 { 00:39:09.841 "code": -17, 00:39:09.841 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:09.841 } 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:09.841 09:05:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:39:10.099 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:39:10.099 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:39:10.099 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:39:10.357 [2024-07-12 09:05:45.302153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:10.357 [2024-07-12 09:05:45.302294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.357 [2024-07-12 09:05:45.302331] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:39:10.357 [2024-07-12 09:05:45.302372] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.357 [2024-07-12 09:05:45.304817] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.357 [2024-07-12 09:05:45.304917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:10.357 [2024-07-12 09:05:45.305083] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:10.357 [2024-07-12 09:05:45.305199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:10.357 pt1 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.357 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.616 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:10.616 "name": "raid_bdev1", 00:39:10.616 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:10.616 "strip_size_kb": 0, 00:39:10.616 "state": "configuring", 00:39:10.616 "raid_level": "raid1", 00:39:10.616 "superblock": true, 00:39:10.616 "num_base_bdevs": 2, 00:39:10.616 "num_base_bdevs_discovered": 1, 00:39:10.616 "num_base_bdevs_operational": 2, 00:39:10.616 "base_bdevs_list": [ 00:39:10.616 { 00:39:10.616 "name": "pt1", 00:39:10.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:10.616 "is_configured": true, 00:39:10.616 "data_offset": 256, 00:39:10.616 "data_size": 7936 00:39:10.616 }, 00:39:10.616 { 00:39:10.616 "name": null, 00:39:10.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.616 "is_configured": false, 00:39:10.616 "data_offset": 256, 00:39:10.616 "data_size": 7936 00:39:10.616 } 00:39:10.616 ] 00:39:10.616 }' 00:39:10.616 09:05:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:39:10.616 09:05:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:11.184 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:39:11.184 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:39:11.184 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:11.184 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:11.440 [2024-07-12 09:05:46.484980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:11.440 [2024-07-12 09:05:46.485141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:11.440 [2024-07-12 09:05:46.485181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:11.440 [2024-07-12 09:05:46.485208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:11.440 [2024-07-12 09:05:46.485890] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:11.440 [2024-07-12 09:05:46.485976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:11.440 [2024-07-12 09:05:46.486106] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:11.440 [2024-07-12 09:05:46.486136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:11.440 [2024-07-12 09:05:46.486336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:39:11.440 [2024-07-12 09:05:46.486363] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:11.440 [2024-07-12 09:05:46.486497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:11.440 [2024-07-12 09:05:46.486922] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:39:11.440 [2024-07-12 09:05:46.486963] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:39:11.440 [2024-07-12 09:05:46.487151] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:11.440 pt2 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:11.440 
09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:11.440 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.696 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:11.696 "name": "raid_bdev1", 00:39:11.696 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:11.696 "strip_size_kb": 0, 00:39:11.696 "state": "online", 00:39:11.696 "raid_level": "raid1", 00:39:11.696 "superblock": true, 00:39:11.696 "num_base_bdevs": 2, 00:39:11.696 "num_base_bdevs_discovered": 2, 00:39:11.696 "num_base_bdevs_operational": 2, 00:39:11.696 "base_bdevs_list": [ 00:39:11.696 { 00:39:11.696 "name": "pt1", 00:39:11.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:11.696 "is_configured": true, 00:39:11.696 "data_offset": 256, 00:39:11.696 "data_size": 7936 00:39:11.696 }, 00:39:11.696 { 00:39:11.696 "name": "pt2", 00:39:11.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:11.696 "is_configured": true, 00:39:11.696 "data_offset": 256, 00:39:11.696 "data_size": 7936 00:39:11.696 } 00:39:11.696 ] 00:39:11.696 }' 00:39:11.697 09:05:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:11.697 09:05:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:12.261 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:39:12.521 [2024-07-12 09:05:47.633507] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:12.521 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:39:12.521 "name": "raid_bdev1", 00:39:12.521 "aliases": [ 00:39:12.521 "f7010708-6862-4b33-862e-cb0d8207411f" 00:39:12.521 ], 00:39:12.521 "product_name": "Raid Volume", 00:39:12.521 "block_size": 4096, 00:39:12.521 "num_blocks": 7936, 00:39:12.521 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:12.521 "assigned_rate_limits": { 00:39:12.521 "rw_ios_per_sec": 0, 00:39:12.521 "rw_mbytes_per_sec": 0, 00:39:12.521 "r_mbytes_per_sec": 0, 00:39:12.521 "w_mbytes_per_sec": 0 00:39:12.521 }, 00:39:12.521 "claimed": false, 00:39:12.521 "zoned": false, 00:39:12.521 "supported_io_types": { 00:39:12.521 "read": true, 00:39:12.521 "write": true, 00:39:12.521 "unmap": false, 00:39:12.521 "flush": false, 
00:39:12.521 "reset": true, 00:39:12.521 "nvme_admin": false, 00:39:12.521 "nvme_io": false, 00:39:12.521 "nvme_io_md": false, 00:39:12.521 "write_zeroes": true, 00:39:12.521 "zcopy": false, 00:39:12.521 "get_zone_info": false, 00:39:12.521 "zone_management": false, 00:39:12.521 "zone_append": false, 00:39:12.521 "compare": false, 00:39:12.521 "compare_and_write": false, 00:39:12.521 "abort": false, 00:39:12.521 "seek_hole": false, 00:39:12.521 "seek_data": false, 00:39:12.521 "copy": false, 00:39:12.521 "nvme_iov_md": false 00:39:12.521 }, 00:39:12.521 "memory_domains": [ 00:39:12.521 { 00:39:12.521 "dma_device_id": "system", 00:39:12.521 "dma_device_type": 1 00:39:12.521 }, 00:39:12.521 { 00:39:12.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:12.521 "dma_device_type": 2 00:39:12.521 }, 00:39:12.521 { 00:39:12.521 "dma_device_id": "system", 00:39:12.521 "dma_device_type": 1 00:39:12.521 }, 00:39:12.521 { 00:39:12.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:12.521 "dma_device_type": 2 00:39:12.521 } 00:39:12.521 ], 00:39:12.521 "driver_specific": { 00:39:12.521 "raid": { 00:39:12.521 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:12.521 "strip_size_kb": 0, 00:39:12.521 "state": "online", 00:39:12.521 "raid_level": "raid1", 00:39:12.521 "superblock": true, 00:39:12.521 "num_base_bdevs": 2, 00:39:12.521 "num_base_bdevs_discovered": 2, 00:39:12.521 "num_base_bdevs_operational": 2, 00:39:12.521 "base_bdevs_list": [ 00:39:12.521 { 00:39:12.521 "name": "pt1", 00:39:12.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:12.521 "is_configured": true, 00:39:12.521 "data_offset": 256, 00:39:12.521 "data_size": 7936 00:39:12.521 }, 00:39:12.521 { 00:39:12.521 "name": "pt2", 00:39:12.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:12.521 "is_configured": true, 00:39:12.521 "data_offset": 256, 00:39:12.521 "data_size": 7936 00:39:12.521 } 00:39:12.521 ] 00:39:12.521 } 00:39:12.521 } 00:39:12.521 }' 00:39:12.521 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:12.780 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:39:12.780 pt2' 00:39:12.780 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:12.780 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:39:12.780 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:13.038 09:05:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:13.039 "name": "pt1", 00:39:13.039 "aliases": [ 00:39:13.039 "00000000-0000-0000-0000-000000000001" 00:39:13.039 ], 00:39:13.039 "product_name": "passthru", 00:39:13.039 "block_size": 4096, 00:39:13.039 "num_blocks": 8192, 00:39:13.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:13.039 "assigned_rate_limits": { 00:39:13.039 "rw_ios_per_sec": 0, 00:39:13.039 "rw_mbytes_per_sec": 0, 00:39:13.039 "r_mbytes_per_sec": 0, 00:39:13.039 "w_mbytes_per_sec": 0 00:39:13.039 }, 00:39:13.039 "claimed": true, 00:39:13.039 "claim_type": "exclusive_write", 00:39:13.039 "zoned": false, 00:39:13.039 "supported_io_types": { 00:39:13.039 "read": true, 00:39:13.039 "write": true, 00:39:13.039 "unmap": true, 00:39:13.039 "flush": true, 00:39:13.039 "reset": true, 00:39:13.039 "nvme_admin": false, 
00:39:13.039 "nvme_io": false, 00:39:13.039 "nvme_io_md": false, 00:39:13.039 "write_zeroes": true, 00:39:13.039 "zcopy": true, 00:39:13.039 "get_zone_info": false, 00:39:13.039 "zone_management": false, 00:39:13.039 "zone_append": false, 00:39:13.039 "compare": false, 00:39:13.039 "compare_and_write": false, 00:39:13.039 "abort": true, 00:39:13.039 "seek_hole": false, 00:39:13.039 "seek_data": false, 00:39:13.039 "copy": true, 00:39:13.039 "nvme_iov_md": false 00:39:13.039 }, 00:39:13.039 "memory_domains": [ 00:39:13.039 { 00:39:13.039 "dma_device_id": "system", 00:39:13.039 "dma_device_type": 1 00:39:13.039 }, 00:39:13.039 { 00:39:13.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:13.039 "dma_device_type": 2 00:39:13.039 } 00:39:13.039 ], 00:39:13.039 "driver_specific": { 00:39:13.039 "passthru": { 00:39:13.039 "name": "pt1", 00:39:13.039 "base_bdev_name": "malloc1" 00:39:13.039 } 00:39:13.039 } 00:39:13.039 }' 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:13.039 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:39:13.297 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:39:13.556 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:39:13.556 "name": "pt2", 00:39:13.556 "aliases": [ 00:39:13.556 "00000000-0000-0000-0000-000000000002" 00:39:13.556 ], 00:39:13.556 "product_name": "passthru", 00:39:13.556 "block_size": 4096, 00:39:13.556 "num_blocks": 8192, 00:39:13.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:13.556 "assigned_rate_limits": { 00:39:13.556 "rw_ios_per_sec": 0, 00:39:13.556 "rw_mbytes_per_sec": 0, 00:39:13.556 "r_mbytes_per_sec": 0, 00:39:13.556 "w_mbytes_per_sec": 0 00:39:13.556 }, 00:39:13.556 "claimed": true, 00:39:13.556 "claim_type": "exclusive_write", 00:39:13.556 "zoned": false, 00:39:13.556 "supported_io_types": { 00:39:13.556 "read": true, 00:39:13.556 "write": true, 00:39:13.556 "unmap": true, 00:39:13.556 "flush": true, 00:39:13.556 "reset": true, 00:39:13.556 "nvme_admin": false, 00:39:13.556 "nvme_io": false, 00:39:13.556 "nvme_io_md": false, 00:39:13.556 "write_zeroes": true, 
00:39:13.556 "zcopy": true, 00:39:13.556 "get_zone_info": false, 00:39:13.556 "zone_management": false, 00:39:13.556 "zone_append": false, 00:39:13.556 "compare": false, 00:39:13.556 "compare_and_write": false, 00:39:13.556 "abort": true, 00:39:13.556 "seek_hole": false, 00:39:13.556 "seek_data": false, 00:39:13.556 "copy": true, 00:39:13.556 "nvme_iov_md": false 00:39:13.556 }, 00:39:13.556 "memory_domains": [ 00:39:13.556 { 00:39:13.556 "dma_device_id": "system", 00:39:13.556 "dma_device_type": 1 00:39:13.556 }, 00:39:13.556 { 00:39:13.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:13.556 "dma_device_type": 2 00:39:13.556 } 00:39:13.556 ], 00:39:13.556 "driver_specific": { 00:39:13.556 "passthru": { 00:39:13.556 "name": "pt2", 00:39:13.556 "base_bdev_name": "malloc2" 00:39:13.556 } 00:39:13.556 } 00:39:13.556 }' 00:39:13.556 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.556 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:13.814 09:05:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:39:14.073 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:14.331 [2024-07-12 09:05:49.370026] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:14.331 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' f7010708-6862-4b33-862e-cb0d8207411f '!=' f7010708-6862-4b33-862e-cb0d8207411f ']' 00:39:14.331 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:39:14.331 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:39:14.331 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:39:14.331 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:14.588 [2024-07-12 09:05:49.573853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:14.588 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:14.845 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:14.845 "name": "raid_bdev1", 00:39:14.845 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:14.845 "strip_size_kb": 0, 00:39:14.845 "state": "online", 00:39:14.845 "raid_level": "raid1", 00:39:14.845 "superblock": true, 00:39:14.845 "num_base_bdevs": 2, 00:39:14.845 "num_base_bdevs_discovered": 1, 00:39:14.845 "num_base_bdevs_operational": 1, 00:39:14.845 "base_bdevs_list": [ 00:39:14.845 { 00:39:14.845 "name": null, 00:39:14.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:14.845 "is_configured": false, 00:39:14.845 "data_offset": 256, 00:39:14.845 "data_size": 7936 00:39:14.845 }, 00:39:14.845 { 00:39:14.845 "name": "pt2", 00:39:14.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:14.845 "is_configured": true, 00:39:14.845 "data_offset": 256, 00:39:14.845 "data_size": 7936 00:39:14.845 } 00:39:14.845 ] 00:39:14.845 }' 00:39:14.845 09:05:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:14.845 09:05:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:15.412 09:05:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:15.670 [2024-07-12 09:05:50.776790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:15.670 [2024-07-12 09:05:50.776833] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:15.670 [2024-07-12 09:05:50.776921] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:15.670 [2024-07-12 09:05:50.777064] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:15.670 [2024-07-12 09:05:50.777080] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:39:15.670 09:05:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:15.670 09:05:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:39:15.929 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:39:15.929 09:05:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:39:15.929 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:39:15.929 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:15.929 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:39:16.187 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:16.444 [2024-07-12 09:05:51.504948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:16.444 [2024-07-12 09:05:51.505087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:16.444 [2024-07-12 09:05:51.505121] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:39:16.444 [2024-07-12 09:05:51.505148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:16.444 [2024-07-12 09:05:51.507479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:16.444 [2024-07-12 09:05:51.507560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:16.444 [2024-07-12 09:05:51.507683] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:16.444 [2024-07-12 09:05:51.507799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:16.444 [2024-07-12 09:05:51.507931] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:39:16.444 [2024-07-12 09:05:51.507956] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:16.444 [2024-07-12 09:05:51.508054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:16.444 [2024-07-12 09:05:51.508438] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:39:16.444 [2024-07-12 09:05:51.508463] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:39:16.444 [2024-07-12 09:05:51.508663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:16.444 pt2 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.444 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:16.701 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:16.701 "name": "raid_bdev1", 00:39:16.701 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:16.701 "strip_size_kb": 0, 00:39:16.701 "state": "online", 00:39:16.701 "raid_level": "raid1", 00:39:16.701 "superblock": true, 00:39:16.701 "num_base_bdevs": 2, 00:39:16.701 "num_base_bdevs_discovered": 1, 00:39:16.701 "num_base_bdevs_operational": 1, 00:39:16.701 "base_bdevs_list": [ 00:39:16.702 { 00:39:16.702 "name": null, 00:39:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.702 "is_configured": false, 00:39:16.702 "data_offset": 256, 00:39:16.702 "data_size": 7936 00:39:16.702 }, 00:39:16.702 { 00:39:16.702 "name": "pt2", 00:39:16.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:16.702 "is_configured": true, 00:39:16.702 "data_offset": 256, 00:39:16.702 "data_size": 7936 00:39:16.702 } 00:39:16.702 ] 00:39:16.702 }' 00:39:16.702 09:05:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:16.702 09:05:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:17.269 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:17.527 [2024-07-12 09:05:52.657360] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:17.527 [2024-07-12 09:05:52.657397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:17.527 [2024-07-12 09:05:52.657490] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:17.527 [2024-07-12 09:05:52.657559] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:17.527 [2024-07-12 09:05:52.657586] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:39:17.527 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:17.527 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:39:17.785 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:39:17.785 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:39:17.785 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:39:17.785 09:05:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:18.043 [2024-07-12 09:05:53.081405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:18.043 [2024-07-12 09:05:53.081516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:18.043 [2024-07-12 09:05:53.081575] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:18.043 [2024-07-12 09:05:53.081598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:18.043 [2024-07-12 09:05:53.084007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:18.043 [2024-07-12 09:05:53.084086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:18.043 [2024-07-12 09:05:53.084246] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:18.043 [2024-07-12 09:05:53.084340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:18.043 [2024-07-12 09:05:53.084514] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:18.043 [2024-07-12 09:05:53.084542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:18.043 [2024-07-12 09:05:53.084561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:39:18.043 [2024-07-12 09:05:53.084629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:18.043 [2024-07-12 09:05:53.084722] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:39:18.043 [2024-07-12 09:05:53.084736] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:18.043 [2024-07-12 09:05:53.084842] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:18.043 [2024-07-12 09:05:53.085223] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:39:18.043 [2024-07-12 09:05:53.085250] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:39:18.043 [2024-07-12 09:05:53.085445] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:18.043 pt1 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:18.043 09:05:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:18.043 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.300 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:18.300 "name": "raid_bdev1", 00:39:18.300 "uuid": "f7010708-6862-4b33-862e-cb0d8207411f", 00:39:18.300 "strip_size_kb": 0, 00:39:18.300 "state": "online", 00:39:18.300 "raid_level": "raid1", 00:39:18.300 "superblock": true, 00:39:18.300 "num_base_bdevs": 2, 00:39:18.300 "num_base_bdevs_discovered": 1, 00:39:18.300 "num_base_bdevs_operational": 1, 00:39:18.300 "base_bdevs_list": [ 00:39:18.300 { 00:39:18.300 "name": null, 00:39:18.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.300 "is_configured": false, 00:39:18.300 "data_offset": 256, 00:39:18.300 "data_size": 7936 00:39:18.300 }, 00:39:18.300 { 00:39:18.300 "name": "pt2", 00:39:18.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:18.300 "is_configured": true, 00:39:18.300 "data_offset": 256, 00:39:18.300 "data_size": 7936 00:39:18.300 } 00:39:18.300 ] 00:39:18.300 }' 00:39:18.300 09:05:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:18.300 09:05:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:19.234 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:39:19.234 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:19.234 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:39:19.234 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:19.234 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:39:19.491 [2024-07-12 09:05:54.598097] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' f7010708-6862-4b33-862e-cb0d8207411f '!=' f7010708-6862-4b33-862e-cb0d8207411f ']' 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 162305 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 162305 ']' 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 162305 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162305 00:39:19.491 killing process with pid 162305 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:19.491 09:05:54 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162305' 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 162305 00:39:19.491 09:05:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 162305 00:39:19.491 [2024-07-12 09:05:54.632465] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:19.491 [2024-07-12 09:05:54.632547] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:19.491 [2024-07-12 09:05:54.632605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:19.491 [2024-07-12 09:05:54.632628] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:39:19.748 [2024-07-12 09:05:54.762378] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:20.679 ************************************ 00:39:20.679 END TEST raid_superblock_test_4k 00:39:20.679 ************************************ 00:39:20.679 09:05:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:39:20.679 00:39:20.679 real 0m17.157s 00:39:20.679 user 0m31.931s 00:39:20.679 sys 0m1.793s 00:39:20.679 09:05:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:20.679 09:05:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:39:20.679 09:05:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:20.679 09:05:55 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:39:20.679 09:05:55 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:39:20.679 09:05:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:20.679 09:05:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:20.679 09:05:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:20.679 ************************************ 00:39:20.679 START TEST raid_rebuild_test_sb_4k 00:39:20.679 ************************************ 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:39:20.679 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 
-- # (( i++ )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=162871 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 162871 /var/tmp/spdk-raid.sock 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 162871 ']' 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:20.680 09:05:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:20.680 [2024-07-12 09:05:55.849012] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:39:20.680 [2024-07-12 09:05:55.849386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162871 ] 00:39:20.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:20.680 Zero copy mechanism will not be used. 
00:39:20.938 [2024-07-12 09:05:56.007444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.196 [2024-07-12 09:05:56.248829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.453 [2024-07-12 09:05:56.434174] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:21.711 09:05:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:21.711 09:05:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:39:21.711 09:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:21.711 09:05:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:39:21.970 BaseBdev1_malloc 00:39:21.970 09:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:22.227 [2024-07-12 09:05:57.317739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:22.227 [2024-07-12 09:05:57.318037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:22.227 [2024-07-12 09:05:57.318223] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:39:22.227 [2024-07-12 09:05:57.318374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:22.227 [2024-07-12 09:05:57.320863] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:22.227 [2024-07-12 09:05:57.321115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:22.227 BaseBdev1 00:39:22.227 09:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:39:22.227 09:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:39:22.529 BaseBdev2_malloc 00:39:22.529 09:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:22.787 [2024-07-12 09:05:57.939709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:22.787 [2024-07-12 09:05:57.940052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:22.787 [2024-07-12 09:05:57.940236] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:39:22.787 [2024-07-12 09:05:57.940419] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:22.787 [2024-07-12 09:05:57.943002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:22.787 [2024-07-12 09:05:57.943166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:22.787 BaseBdev2 00:39:22.787 09:05:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:39:23.043 spare_malloc 00:39:23.044 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:23.302 spare_delay 00:39:23.303 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:23.560 [2024-07-12 09:05:58.570816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:23.560 [2024-07-12 09:05:58.571171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:23.560 [2024-07-12 09:05:58.571408] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:39:23.560 [2024-07-12 09:05:58.571582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:23.560 [2024-07-12 09:05:58.575135] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:23.560 [2024-07-12 09:05:58.575347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:23.560 spare 00:39:23.560 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:39:23.818 [2024-07-12 09:05:58.815944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:23.818 [2024-07-12 09:05:58.818977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:23.818 [2024-07-12 09:05:58.819436] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:39:23.818 [2024-07-12 09:05:58.819598] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:23.818 [2024-07-12 09:05:58.819835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:39:23.818 [2024-07-12 09:05:58.820527] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:39:23.818 [2024-07-12 09:05:58.820677] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:39:23.818 [2024-07-12 09:05:58.821084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.818 09:05:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.076 09:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:24.076 "name": "raid_bdev1", 00:39:24.076 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:24.076 "strip_size_kb": 0, 00:39:24.076 "state": "online", 00:39:24.076 "raid_level": "raid1", 00:39:24.076 "superblock": true, 00:39:24.076 "num_base_bdevs": 2, 00:39:24.076 "num_base_bdevs_discovered": 2, 00:39:24.076 "num_base_bdevs_operational": 2, 00:39:24.076 "base_bdevs_list": [ 00:39:24.076 { 00:39:24.076 "name": "BaseBdev1", 00:39:24.076 "uuid": "b806bb3f-6843-5919-86b4-0a5fea5ed5eb", 00:39:24.076 "is_configured": true, 00:39:24.076 "data_offset": 256, 00:39:24.076 "data_size": 7936 00:39:24.076 }, 00:39:24.076 { 00:39:24.076 "name": "BaseBdev2", 00:39:24.076 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:24.076 "is_configured": true, 00:39:24.076 "data_offset": 256, 00:39:24.076 "data_size": 7936 00:39:24.076 } 00:39:24.076 ] 00:39:24.076 }' 00:39:24.076 09:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:24.076 09:05:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:24.642 09:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:24.642 09:05:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:39:24.914 [2024-07-12 09:06:00.005764] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:24.914 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:39:24.914 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:24.914 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:25.183 09:06:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:25.183 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:25.442 [2024-07-12 09:06:00.433627] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:25.442 /dev/nbd0 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:25.442 1+0 records in 00:39:25.442 1+0 records out 00:39:25.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593957 s, 6.9 MB/s 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:39:25.442 09:06:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:39:26.376 7936+0 records in 00:39:26.376 7936+0 records out 00:39:26.376 32505856 bytes (33 MB, 31 MiB) copied, 0.912582 s, 35.6 MB/s 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:26.376 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:26.634 [2024-07-12 09:06:01.690377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:26.634 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:39:26.892 [2024-07-12 09:06:01.890048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.892 09:06:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.149 09:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:27.149 "name": "raid_bdev1", 00:39:27.149 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:27.149 "strip_size_kb": 0, 00:39:27.149 "state": "online", 00:39:27.149 "raid_level": "raid1", 00:39:27.149 "superblock": true, 00:39:27.149 "num_base_bdevs": 2, 00:39:27.149 "num_base_bdevs_discovered": 1, 00:39:27.149 
"num_base_bdevs_operational": 1, 00:39:27.149 "base_bdevs_list": [ 00:39:27.149 { 00:39:27.149 "name": null, 00:39:27.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.149 "is_configured": false, 00:39:27.149 "data_offset": 256, 00:39:27.149 "data_size": 7936 00:39:27.149 }, 00:39:27.149 { 00:39:27.149 "name": "BaseBdev2", 00:39:27.149 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:27.149 "is_configured": true, 00:39:27.149 "data_offset": 256, 00:39:27.149 "data_size": 7936 00:39:27.149 } 00:39:27.149 ] 00:39:27.149 }' 00:39:27.149 09:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:27.149 09:06:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:27.715 09:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:27.973 [2024-07-12 09:06:02.966255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:27.973 [2024-07-12 09:06:02.978177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:39:27.973 [2024-07-12 09:06:02.980021] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:27.973 09:06:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:28.912 09:06:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.170 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:29.170 "name": "raid_bdev1", 00:39:29.170 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:29.170 "strip_size_kb": 0, 00:39:29.170 "state": "online", 00:39:29.170 "raid_level": "raid1", 00:39:29.170 "superblock": true, 00:39:29.170 "num_base_bdevs": 2, 00:39:29.170 "num_base_bdevs_discovered": 2, 00:39:29.171 "num_base_bdevs_operational": 2, 00:39:29.171 "process": { 00:39:29.171 "type": "rebuild", 00:39:29.171 "target": "spare", 00:39:29.171 "progress": { 00:39:29.171 "blocks": 3072, 00:39:29.171 "percent": 38 00:39:29.171 } 00:39:29.171 }, 00:39:29.171 "base_bdevs_list": [ 00:39:29.171 { 00:39:29.171 "name": "spare", 00:39:29.171 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:29.171 "is_configured": true, 00:39:29.171 "data_offset": 256, 00:39:29.171 "data_size": 7936 00:39:29.171 }, 00:39:29.171 { 00:39:29.171 "name": "BaseBdev2", 00:39:29.171 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:29.171 "is_configured": true, 00:39:29.171 "data_offset": 256, 00:39:29.171 "data_size": 7936 00:39:29.171 } 00:39:29.171 ] 00:39:29.171 }' 00:39:29.171 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:29.171 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:29.171 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:29.429 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:29.429 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:29.688 [2024-07-12 09:06:04.662177] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:29.688 [2024-07-12 09:06:04.689901] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:29.688 [2024-07-12 09:06:04.690129] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:29.688 [2024-07-12 09:06:04.690250] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:29.688 [2024-07-12 09:06:04.690338] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:29.688 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.947 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:29.947 "name": "raid_bdev1", 00:39:29.947 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:29.947 "strip_size_kb": 0, 00:39:29.947 "state": "online", 00:39:29.947 "raid_level": "raid1", 00:39:29.947 "superblock": true, 00:39:29.947 "num_base_bdevs": 2, 00:39:29.947 "num_base_bdevs_discovered": 1, 00:39:29.947 "num_base_bdevs_operational": 1, 00:39:29.947 "base_bdevs_list": [ 00:39:29.947 { 00:39:29.947 "name": null, 00:39:29.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.947 "is_configured": false, 00:39:29.947 "data_offset": 256, 00:39:29.947 "data_size": 7936 00:39:29.947 }, 00:39:29.947 { 00:39:29.947 "name": "BaseBdev2", 00:39:29.947 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:29.947 "is_configured": true, 00:39:29.947 
"data_offset": 256, 00:39:29.947 "data_size": 7936 00:39:29.947 } 00:39:29.947 ] 00:39:29.947 }' 00:39:29.947 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:29.947 09:06:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:30.513 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:30.771 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:30.771 "name": "raid_bdev1", 00:39:30.771 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:30.771 "strip_size_kb": 0, 00:39:30.771 "state": "online", 00:39:30.771 "raid_level": "raid1", 00:39:30.771 "superblock": true, 00:39:30.771 "num_base_bdevs": 2, 00:39:30.771 "num_base_bdevs_discovered": 1, 00:39:30.771 "num_base_bdevs_operational": 1, 00:39:30.771 "base_bdevs_list": [ 00:39:30.771 { 00:39:30.771 "name": null, 00:39:30.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.771 "is_configured": false, 00:39:30.771 "data_offset": 256, 00:39:30.771 "data_size": 7936 00:39:30.771 }, 00:39:30.771 { 00:39:30.771 "name": "BaseBdev2", 00:39:30.771 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:30.771 "is_configured": true, 00:39:30.771 "data_offset": 256, 00:39:30.771 "data_size": 7936 00:39:30.771 } 00:39:30.771 ] 00:39:30.771 }' 00:39:30.771 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:30.771 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:30.771 09:06:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:31.028 09:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:31.028 09:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:31.286 [2024-07-12 09:06:06.236002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:31.286 [2024-07-12 09:06:06.247611] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:39:31.286 [2024-07-12 09:06:06.249610] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:31.286 09:06:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:32.219 
09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.219 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:32.477 "name": "raid_bdev1", 00:39:32.477 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:32.477 "strip_size_kb": 0, 00:39:32.477 "state": "online", 00:39:32.477 "raid_level": "raid1", 00:39:32.477 "superblock": true, 00:39:32.477 "num_base_bdevs": 2, 00:39:32.477 "num_base_bdevs_discovered": 2, 00:39:32.477 "num_base_bdevs_operational": 2, 00:39:32.477 "process": { 00:39:32.477 "type": "rebuild", 00:39:32.477 "target": "spare", 00:39:32.477 "progress": { 00:39:32.477 "blocks": 3072, 00:39:32.477 "percent": 38 00:39:32.477 } 00:39:32.477 }, 00:39:32.477 "base_bdevs_list": [ 00:39:32.477 { 00:39:32.477 "name": "spare", 00:39:32.477 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:32.477 "is_configured": true, 00:39:32.477 "data_offset": 256, 00:39:32.477 "data_size": 7936 00:39:32.477 }, 00:39:32.477 { 00:39:32.477 "name": "BaseBdev2", 00:39:32.477 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:32.477 "is_configured": true, 00:39:32.477 "data_offset": 256, 00:39:32.477 "data_size": 7936 00:39:32.477 } 00:39:32.477 ] 00:39:32.477 }' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:39:32.477 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1466 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:32.477 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:32.735 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:32.735 "name": "raid_bdev1", 00:39:32.735 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:32.735 "strip_size_kb": 0, 00:39:32.735 "state": "online", 00:39:32.735 "raid_level": "raid1", 00:39:32.735 "superblock": true, 00:39:32.735 "num_base_bdevs": 2, 00:39:32.735 "num_base_bdevs_discovered": 2, 00:39:32.735 "num_base_bdevs_operational": 2, 00:39:32.735 "process": { 00:39:32.735 "type": "rebuild", 00:39:32.735 "target": "spare", 00:39:32.735 "progress": { 00:39:32.735 "blocks": 3840, 00:39:32.735 "percent": 48 00:39:32.735 } 00:39:32.735 }, 00:39:32.735 "base_bdevs_list": [ 00:39:32.735 { 00:39:32.735 "name": "spare", 00:39:32.736 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:32.736 "is_configured": true, 00:39:32.736 "data_offset": 256, 00:39:32.736 "data_size": 7936 00:39:32.736 }, 00:39:32.736 { 00:39:32.736 "name": "BaseBdev2", 00:39:32.736 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:32.736 "is_configured": true, 00:39:32.736 "data_offset": 256, 00:39:32.736 "data_size": 7936 00:39:32.736 } 00:39:32.736 ] 00:39:32.736 }' 00:39:32.736 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:32.736 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:32.736 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:32.994 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:32.994 09:06:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:33.929 09:06:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:34.189 "name": "raid_bdev1", 00:39:34.189 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:34.189 "strip_size_kb": 0, 00:39:34.189 "state": "online", 00:39:34.189 "raid_level": "raid1", 00:39:34.189 "superblock": true, 00:39:34.189 
"num_base_bdevs": 2, 00:39:34.189 "num_base_bdevs_discovered": 2, 00:39:34.189 "num_base_bdevs_operational": 2, 00:39:34.189 "process": { 00:39:34.189 "type": "rebuild", 00:39:34.189 "target": "spare", 00:39:34.189 "progress": { 00:39:34.189 "blocks": 7424, 00:39:34.189 "percent": 93 00:39:34.189 } 00:39:34.189 }, 00:39:34.189 "base_bdevs_list": [ 00:39:34.189 { 00:39:34.189 "name": "spare", 00:39:34.189 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:34.189 "is_configured": true, 00:39:34.189 "data_offset": 256, 00:39:34.189 "data_size": 7936 00:39:34.189 }, 00:39:34.189 { 00:39:34.189 "name": "BaseBdev2", 00:39:34.189 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:34.189 "is_configured": true, 00:39:34.189 "data_offset": 256, 00:39:34.189 "data_size": 7936 00:39:34.189 } 00:39:34.189 ] 00:39:34.189 }' 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:34.189 09:06:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:39:34.189 [2024-07-12 09:06:09.366477] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:34.189 [2024-07-12 09:06:09.366713] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:34.189 [2024-07-12 09:06:09.366964] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:35.563 "name": "raid_bdev1", 00:39:35.563 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:35.563 "strip_size_kb": 0, 00:39:35.563 "state": "online", 00:39:35.563 "raid_level": "raid1", 00:39:35.563 "superblock": true, 00:39:35.563 "num_base_bdevs": 2, 00:39:35.563 "num_base_bdevs_discovered": 2, 00:39:35.563 "num_base_bdevs_operational": 2, 00:39:35.563 "base_bdevs_list": [ 00:39:35.563 { 00:39:35.563 "name": "spare", 00:39:35.563 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:35.563 "is_configured": true, 00:39:35.563 "data_offset": 256, 00:39:35.563 "data_size": 7936 00:39:35.563 }, 00:39:35.563 { 00:39:35.563 "name": "BaseBdev2", 00:39:35.563 
"uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:35.563 "is_configured": true, 00:39:35.563 "data_offset": 256, 00:39:35.563 "data_size": 7936 00:39:35.563 } 00:39:35.563 ] 00:39:35.563 }' 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:35.563 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.821 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:35.821 "name": "raid_bdev1", 00:39:35.821 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:35.821 "strip_size_kb": 0, 00:39:35.821 "state": "online", 00:39:35.821 "raid_level": "raid1", 00:39:35.821 "superblock": true, 00:39:35.821 "num_base_bdevs": 2, 00:39:35.821 "num_base_bdevs_discovered": 2, 00:39:35.821 "num_base_bdevs_operational": 2, 00:39:35.821 "base_bdevs_list": [ 00:39:35.821 { 00:39:35.821 "name": "spare", 00:39:35.821 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:35.821 "is_configured": true, 00:39:35.821 "data_offset": 256, 00:39:35.821 "data_size": 7936 00:39:35.821 }, 00:39:35.821 { 00:39:35.821 "name": "BaseBdev2", 00:39:35.821 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:35.821 "is_configured": true, 00:39:35.821 "data_offset": 256, 00:39:35.821 "data_size": 7936 00:39:35.821 } 00:39:35.821 ] 00:39:35.821 }' 00:39:35.821 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:35.821 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:35.821 09:06:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:36.080 
09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:36.080 "name": "raid_bdev1", 00:39:36.080 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:36.080 "strip_size_kb": 0, 00:39:36.080 "state": "online", 00:39:36.080 "raid_level": "raid1", 00:39:36.080 "superblock": true, 00:39:36.080 "num_base_bdevs": 2, 00:39:36.080 "num_base_bdevs_discovered": 2, 00:39:36.080 "num_base_bdevs_operational": 2, 00:39:36.080 "base_bdevs_list": [ 00:39:36.080 { 00:39:36.080 "name": "spare", 00:39:36.080 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:36.080 "is_configured": true, 00:39:36.080 "data_offset": 256, 00:39:36.080 "data_size": 7936 00:39:36.080 }, 00:39:36.080 { 00:39:36.080 "name": "BaseBdev2", 00:39:36.080 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:36.080 "is_configured": true, 00:39:36.080 "data_offset": 256, 00:39:36.080 "data_size": 7936 00:39:36.080 } 00:39:36.080 ] 00:39:36.080 }' 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:36.080 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:37.015 09:06:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:37.274 [2024-07-12 09:06:12.228274] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:37.274 [2024-07-12 09:06:12.228500] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:37.274 [2024-07-12 09:06:12.228696] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:37.274 [2024-07-12 09:06:12.228875] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:37.274 [2024-07-12 09:06:12.229026] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:39:37.274 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.274 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' 
false = true ']' 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:37.531 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:37.788 /dev/nbd0 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:37.788 1+0 records in 00:39:37.788 1+0 records out 00:39:37.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649638 s, 6.3 MB/s 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:37.788 09:06:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:39:38.045 /dev/nbd1 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:38.045 1+0 records in 00:39:38.045 1+0 records out 00:39:38.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434431 s, 9.4 MB/s 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:38.045 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:38.310 09:06:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:38.310 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:38.572 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:39:38.829 09:06:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:39.086 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:39.343 [2024-07-12 09:06:14.493253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:39.343 [2024-07-12 09:06:14.493536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:39.343 [2024-07-12 09:06:14.493706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:39:39.343 [2024-07-12 09:06:14.493845] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:39.343 [2024-07-12 09:06:14.496521] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:39.343 [2024-07-12 09:06:14.496692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:39.343 [2024-07-12 09:06:14.496964] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:39.343 [2024-07-12 09:06:14.497133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:39.343 [2024-07-12 09:06:14.497402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:39.343 spare 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:39.343 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.601 [2024-07-12 09:06:14.597686] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:39:39.601 [2024-07-12 09:06:14.597881] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:39:39.601 [2024-07-12 09:06:14.598124] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:39:39.601 [2024-07-12 09:06:14.598669] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:39:39.601 [2024-07-12 09:06:14.598778] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:39:39.601 [2024-07-12 09:06:14.599035] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:39.601 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:39.601 "name": "raid_bdev1", 00:39:39.601 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:39.601 "strip_size_kb": 0, 00:39:39.601 "state": "online", 00:39:39.601 "raid_level": "raid1", 00:39:39.601 "superblock": true, 00:39:39.601 "num_base_bdevs": 2, 00:39:39.601 "num_base_bdevs_discovered": 2, 00:39:39.601 "num_base_bdevs_operational": 2, 00:39:39.601 "base_bdevs_list": [ 00:39:39.601 { 00:39:39.601 "name": "spare", 00:39:39.601 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:39.601 "is_configured": true, 00:39:39.601 "data_offset": 256, 00:39:39.601 "data_size": 7936 00:39:39.601 }, 00:39:39.601 { 
00:39:39.601 "name": "BaseBdev2", 00:39:39.601 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:39.601 "is_configured": true, 00:39:39.601 "data_offset": 256, 00:39:39.601 "data_size": 7936 00:39:39.601 } 00:39:39.601 ] 00:39:39.601 }' 00:39:39.601 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:39.601 09:06:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:40.558 "name": "raid_bdev1", 00:39:40.558 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:40.558 "strip_size_kb": 0, 00:39:40.558 "state": "online", 00:39:40.558 "raid_level": "raid1", 00:39:40.558 "superblock": true, 00:39:40.558 "num_base_bdevs": 2, 00:39:40.558 "num_base_bdevs_discovered": 2, 00:39:40.558 "num_base_bdevs_operational": 2, 00:39:40.558 "base_bdevs_list": [ 00:39:40.558 { 00:39:40.558 "name": "spare", 00:39:40.558 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:40.558 "is_configured": true, 00:39:40.558 "data_offset": 256, 00:39:40.558 "data_size": 7936 00:39:40.558 }, 00:39:40.558 { 00:39:40.558 "name": "BaseBdev2", 00:39:40.558 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:40.558 "is_configured": true, 00:39:40.558 "data_offset": 256, 00:39:40.558 "data_size": 7936 00:39:40.558 } 00:39:40.558 ] 00:39:40.558 }' 00:39:40.558 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:40.816 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:40.816 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:40.816 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:40.816 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:40.816 09:06:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:41.074 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:39:41.074 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:39:41.332 [2024-07-12 09:06:16.366131] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:41.332 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.590 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:41.590 "name": "raid_bdev1", 00:39:41.590 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:41.590 "strip_size_kb": 0, 00:39:41.590 "state": "online", 00:39:41.590 "raid_level": "raid1", 00:39:41.590 "superblock": true, 00:39:41.590 "num_base_bdevs": 2, 00:39:41.590 "num_base_bdevs_discovered": 1, 00:39:41.590 "num_base_bdevs_operational": 1, 00:39:41.590 "base_bdevs_list": [ 00:39:41.590 { 00:39:41.590 "name": null, 00:39:41.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.590 "is_configured": false, 00:39:41.590 "data_offset": 256, 00:39:41.590 "data_size": 7936 00:39:41.590 }, 00:39:41.590 { 00:39:41.590 "name": "BaseBdev2", 00:39:41.590 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:41.590 "is_configured": true, 00:39:41.590 "data_offset": 256, 00:39:41.590 "data_size": 7936 00:39:41.590 } 00:39:41.590 ] 00:39:41.590 }' 00:39:41.590 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:41.590 09:06:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:42.155 09:06:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:39:42.722 [2024-07-12 09:06:17.610621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:42.722 [2024-07-12 09:06:17.611033] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:42.722 [2024-07-12 09:06:17.611161] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
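The remove/re-add cycle traced here reduces to three RPC calls against the test socket. A minimal sketch, assuming the same rpc.py path and /var/tmp/spdk-raid.sock socket used above (the $rpc shorthand is only for brevity):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Drop the 'spare' base bdev; raid_bdev1 stays online with one operational member.
  $rpc bdev_raid_remove_base_bdev spare

  # The degraded view is visible in the raid bdev info.
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'

  # Handing the same bdev back via bdev_raid_add_base_bdev matches its superblock
  # against the array and re-adds it, kicking off the rebuild traced next.
  $rpc bdev_raid_add_base_bdev raid_bdev1 spare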
00:39:42.722 [2024-07-12 09:06:17.611255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:42.722 [2024-07-12 09:06:17.624786] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:39:42.722 [2024-07-12 09:06:17.627005] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:42.722 09:06:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.657 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:43.916 "name": "raid_bdev1", 00:39:43.916 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:43.916 "strip_size_kb": 0, 00:39:43.916 "state": "online", 00:39:43.916 "raid_level": "raid1", 00:39:43.916 "superblock": true, 00:39:43.916 "num_base_bdevs": 2, 00:39:43.916 "num_base_bdevs_discovered": 2, 00:39:43.916 "num_base_bdevs_operational": 2, 00:39:43.916 "process": { 00:39:43.916 "type": "rebuild", 00:39:43.916 "target": "spare", 00:39:43.916 "progress": { 00:39:43.916 "blocks": 3072, 00:39:43.916 "percent": 38 00:39:43.916 } 00:39:43.916 }, 00:39:43.916 "base_bdevs_list": [ 00:39:43.916 { 00:39:43.916 "name": "spare", 00:39:43.916 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:43.916 "is_configured": true, 00:39:43.916 "data_offset": 256, 00:39:43.916 "data_size": 7936 00:39:43.916 }, 00:39:43.916 { 00:39:43.916 "name": "BaseBdev2", 00:39:43.916 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:43.916 "is_configured": true, 00:39:43.916 "data_offset": 256, 00:39:43.916 "data_size": 7936 00:39:43.916 } 00:39:43.916 ] 00:39:43.916 }' 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:43.916 09:06:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:44.174 [2024-07-12 09:06:19.245397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:44.174 [2024-07-12 09:06:19.336863] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:44.174 [2024-07-12 09:06:19.337117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
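The verify_raid_bdev_process step above is essentially two jq filters over bdev_raid_get_bdevs. A rough equivalent under the same assumptions:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  # While the rebuild runs, the process object names its type and the target bdev.
  [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]]
  [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]

  # progress.blocks / progress.percent report how far the rebuild has advanced.
  jq -r '.process.progress.percent' <<< "$info"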
raid_bdev_destroy_cb 00:39:44.174 [2024-07-12 09:06:19.337170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:44.174 [2024-07-12 09:06:19.337309] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:44.432 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:44.433 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.691 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:44.691 "name": "raid_bdev1", 00:39:44.691 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:44.691 "strip_size_kb": 0, 00:39:44.691 "state": "online", 00:39:44.691 "raid_level": "raid1", 00:39:44.691 "superblock": true, 00:39:44.691 "num_base_bdevs": 2, 00:39:44.691 "num_base_bdevs_discovered": 1, 00:39:44.691 "num_base_bdevs_operational": 1, 00:39:44.691 "base_bdevs_list": [ 00:39:44.691 { 00:39:44.691 "name": null, 00:39:44.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.691 "is_configured": false, 00:39:44.691 "data_offset": 256, 00:39:44.691 "data_size": 7936 00:39:44.691 }, 00:39:44.691 { 00:39:44.691 "name": "BaseBdev2", 00:39:44.691 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:44.691 "is_configured": true, 00:39:44.691 "data_offset": 256, 00:39:44.691 "data_size": 7936 00:39:44.691 } 00:39:44.691 ] 00:39:44.691 }' 00:39:44.691 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:44.692 09:06:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:45.259 09:06:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:45.517 [2024-07-12 09:06:20.563235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:45.517 [2024-07-12 09:06:20.563517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:45.517 [2024-07-12 09:06:20.563591] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:45.517 [2024-07-12 09:06:20.563840] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:45.517 [2024-07-12 09:06:20.564522] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:45.517 [2024-07-12 09:06:20.564701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:45.517 [2024-07-12 09:06:20.564961] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:45.517 [2024-07-12 09:06:20.565080] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:45.517 [2024-07-12 09:06:20.565192] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:39:45.517 [2024-07-12 09:06:20.565270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:45.517 [2024-07-12 09:06:20.579319] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:39:45.517 spare 00:39:45.517 [2024-07-12 09:06:20.581757] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:45.517 09:06:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:46.452 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:46.710 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:46.710 "name": "raid_bdev1", 00:39:46.710 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:46.710 "strip_size_kb": 0, 00:39:46.710 "state": "online", 00:39:46.710 "raid_level": "raid1", 00:39:46.710 "superblock": true, 00:39:46.710 "num_base_bdevs": 2, 00:39:46.710 "num_base_bdevs_discovered": 2, 00:39:46.710 "num_base_bdevs_operational": 2, 00:39:46.710 "process": { 00:39:46.710 "type": "rebuild", 00:39:46.710 "target": "spare", 00:39:46.710 "progress": { 00:39:46.710 "blocks": 3072, 00:39:46.710 "percent": 38 00:39:46.710 } 00:39:46.710 }, 00:39:46.710 "base_bdevs_list": [ 00:39:46.710 { 00:39:46.710 "name": "spare", 00:39:46.710 "uuid": "ae96e810-9b0e-5520-a79c-9703d2a1f6bd", 00:39:46.710 "is_configured": true, 00:39:46.710 "data_offset": 256, 00:39:46.710 "data_size": 7936 00:39:46.710 }, 00:39:46.710 { 00:39:46.710 "name": "BaseBdev2", 00:39:46.710 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:46.710 "is_configured": true, 00:39:46.710 "data_offset": 256, 00:39:46.710 "data_size": 7936 00:39:46.710 } 00:39:46.710 ] 00:39:46.710 }' 00:39:46.710 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:46.969 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
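Re-creating the passthru bdev on top of spare_delay is what lets the array heal in this round: examine finds the raid superblock on the new 'spare', re-adds it to raid_bdev1, and a rebuild starts without an explicit add call. Condensed, with the same socket assumption:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Recreate the spare passthru over the delay bdev; examine spots the raid
  # superblock on it, re-adds it to raid_bdev1, and a rebuild begins.
  $rpc bdev_passthru_create -b spare_delay -p spare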
00:39:46.969 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:46.969 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:46.969 09:06:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:47.227 [2024-07-12 09:06:22.240190] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:47.227 [2024-07-12 09:06:22.292075] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:47.227 [2024-07-12 09:06:22.292325] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:47.227 [2024-07-12 09:06:22.292524] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:47.227 [2024-07-12 09:06:22.292566] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:47.227 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.486 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:47.486 "name": "raid_bdev1", 00:39:47.486 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:47.486 "strip_size_kb": 0, 00:39:47.486 "state": "online", 00:39:47.486 "raid_level": "raid1", 00:39:47.486 "superblock": true, 00:39:47.486 "num_base_bdevs": 2, 00:39:47.486 "num_base_bdevs_discovered": 1, 00:39:47.486 "num_base_bdevs_operational": 1, 00:39:47.486 "base_bdevs_list": [ 00:39:47.486 { 00:39:47.486 "name": null, 00:39:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:47.486 "is_configured": false, 00:39:47.486 "data_offset": 256, 00:39:47.486 "data_size": 7936 00:39:47.486 }, 00:39:47.486 { 00:39:47.486 "name": "BaseBdev2", 00:39:47.486 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:47.486 "is_configured": true, 00:39:47.486 "data_offset": 256, 00:39:47.486 "data_size": 7936 00:39:47.486 } 00:39:47.486 ] 00:39:47.486 }' 00:39:47.486 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:39:47.486 09:06:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:48.435 "name": "raid_bdev1", 00:39:48.435 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:48.435 "strip_size_kb": 0, 00:39:48.435 "state": "online", 00:39:48.435 "raid_level": "raid1", 00:39:48.435 "superblock": true, 00:39:48.435 "num_base_bdevs": 2, 00:39:48.435 "num_base_bdevs_discovered": 1, 00:39:48.435 "num_base_bdevs_operational": 1, 00:39:48.435 "base_bdevs_list": [ 00:39:48.435 { 00:39:48.435 "name": null, 00:39:48.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:48.435 "is_configured": false, 00:39:48.435 "data_offset": 256, 00:39:48.435 "data_size": 7936 00:39:48.435 }, 00:39:48.435 { 00:39:48.435 "name": "BaseBdev2", 00:39:48.435 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:48.435 "is_configured": true, 00:39:48.435 "data_offset": 256, 00:39:48.435 "data_size": 7936 00:39:48.435 } 00:39:48.435 ] 00:39:48.435 }' 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:48.435 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:48.694 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:48.694 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:48.952 09:06:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:49.210 [2024-07-12 09:06:24.192626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:49.210 [2024-07-12 09:06:24.192934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:49.210 [2024-07-12 09:06:24.193101] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:39:49.210 [2024-07-12 09:06:24.193242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:49.210 [2024-07-12 09:06:24.193941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:49.210 [2024-07-12 09:06:24.194091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:39:49.210 [2024-07-12 09:06:24.194354] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:49.210 [2024-07-12 09:06:24.194471] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:49.210 [2024-07-12 09:06:24.194570] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:49.210 BaseBdev1 00:39:49.210 09:06:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:50.151 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.420 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:50.420 "name": "raid_bdev1", 00:39:50.420 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:50.420 "strip_size_kb": 0, 00:39:50.420 "state": "online", 00:39:50.420 "raid_level": "raid1", 00:39:50.420 "superblock": true, 00:39:50.420 "num_base_bdevs": 2, 00:39:50.420 "num_base_bdevs_discovered": 1, 00:39:50.420 "num_base_bdevs_operational": 1, 00:39:50.420 "base_bdevs_list": [ 00:39:50.420 { 00:39:50.420 "name": null, 00:39:50.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:50.420 "is_configured": false, 00:39:50.420 "data_offset": 256, 00:39:50.420 "data_size": 7936 00:39:50.420 }, 00:39:50.420 { 00:39:50.420 "name": "BaseBdev2", 00:39:50.420 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:50.420 "is_configured": true, 00:39:50.420 "data_offset": 256, 00:39:50.420 "data_size": 7936 00:39:50.420 } 00:39:50.420 ] 00:39:50.420 }' 00:39:50.420 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:50.420 09:06:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:50.987 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.245 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:51.245 "name": "raid_bdev1", 00:39:51.245 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:51.245 "strip_size_kb": 0, 00:39:51.245 "state": "online", 00:39:51.245 "raid_level": "raid1", 00:39:51.245 "superblock": true, 00:39:51.245 "num_base_bdevs": 2, 00:39:51.245 "num_base_bdevs_discovered": 1, 00:39:51.245 "num_base_bdevs_operational": 1, 00:39:51.245 "base_bdevs_list": [ 00:39:51.245 { 00:39:51.245 "name": null, 00:39:51.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:51.245 "is_configured": false, 00:39:51.245 "data_offset": 256, 00:39:51.245 "data_size": 7936 00:39:51.245 }, 00:39:51.245 { 00:39:51.246 "name": "BaseBdev2", 00:39:51.246 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:51.246 "is_configured": true, 00:39:51.246 "data_offset": 256, 00:39:51.246 "data_size": 7936 00:39:51.246 } 00:39:51.246 ] 00:39:51.246 }' 00:39:51.246 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:51.504 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:51.762 [2024-07-12 09:06:26.801512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:51.762 [2024-07-12 09:06:26.801928] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:51.762 [2024-07-12 09:06:26.802045] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:51.762 request: 00:39:51.762 { 00:39:51.762 "base_bdev": "BaseBdev1", 00:39:51.762 "raid_bdev": "raid_bdev1", 00:39:51.762 "method": "bdev_raid_add_base_bdev", 00:39:51.762 "req_id": 1 00:39:51.762 } 00:39:51.762 Got JSON-RPC error response 00:39:51.762 response: 00:39:51.762 { 00:39:51.762 "code": -22, 00:39:51.762 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:51.762 } 00:39:51.762 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:39:51.762 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:51.762 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:51.762 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:51.762 09:06:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:39:52.696 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:52.696 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:52.697 09:06:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.955 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:52.955 "name": "raid_bdev1", 00:39:52.955 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:52.955 "strip_size_kb": 0, 00:39:52.955 "state": "online", 00:39:52.955 "raid_level": "raid1", 00:39:52.955 "superblock": true, 00:39:52.955 "num_base_bdevs": 2, 00:39:52.955 "num_base_bdevs_discovered": 1, 00:39:52.955 "num_base_bdevs_operational": 1, 00:39:52.955 
"base_bdevs_list": [ 00:39:52.955 { 00:39:52.955 "name": null, 00:39:52.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.955 "is_configured": false, 00:39:52.955 "data_offset": 256, 00:39:52.955 "data_size": 7936 00:39:52.955 }, 00:39:52.955 { 00:39:52.955 "name": "BaseBdev2", 00:39:52.955 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:52.955 "is_configured": true, 00:39:52.955 "data_offset": 256, 00:39:52.955 "data_size": 7936 00:39:52.955 } 00:39:52.955 ] 00:39:52.955 }' 00:39:52.955 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:52.955 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:53.889 09:06:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:53.889 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:53.889 "name": "raid_bdev1", 00:39:53.889 "uuid": "20d3b1eb-a2de-4067-9e8b-349a3af6d0ac", 00:39:53.889 "strip_size_kb": 0, 00:39:53.889 "state": "online", 00:39:53.889 "raid_level": "raid1", 00:39:53.889 "superblock": true, 00:39:53.889 "num_base_bdevs": 2, 00:39:53.889 "num_base_bdevs_discovered": 1, 00:39:53.889 "num_base_bdevs_operational": 1, 00:39:53.889 "base_bdevs_list": [ 00:39:53.889 { 00:39:53.889 "name": null, 00:39:53.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:53.889 "is_configured": false, 00:39:53.889 "data_offset": 256, 00:39:53.889 "data_size": 7936 00:39:53.889 }, 00:39:53.889 { 00:39:53.889 "name": "BaseBdev2", 00:39:53.889 "uuid": "269ba080-ade8-5bab-a3d4-e2212521d8f2", 00:39:53.889 "is_configured": true, 00:39:53.889 "data_offset": 256, 00:39:53.889 "data_size": 7936 00:39:53.889 } 00:39:53.889 ] 00:39:53.889 }' 00:39:53.889 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 162871 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 162871 ']' 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 162871 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162871 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162871' 00:39:54.148 killing process with pid 162871 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 162871 00:39:54.148 Received shutdown signal, test time was about 60.000000 seconds 00:39:54.148 00:39:54.148 Latency(us) 00:39:54.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:54.148 =================================================================================================================== 00:39:54.148 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:54.148 09:06:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 162871 00:39:54.148 [2024-07-12 09:06:29.172594] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:54.148 [2024-07-12 09:06:29.172905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:54.148 [2024-07-12 09:06:29.173053] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:54.148 [2024-07-12 09:06:29.173175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:39:54.412 [2024-07-12 09:06:29.374496] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:55.352 09:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:39:55.352 00:39:55.352 real 0m34.583s 00:39:55.352 user 0m55.685s 00:39:55.352 sys 0m3.817s 00:39:55.352 09:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:55.352 09:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:39:55.352 ************************************ 00:39:55.352 END TEST raid_rebuild_test_sb_4k 00:39:55.352 ************************************ 00:39:55.352 09:06:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:55.352 09:06:30 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:39:55.352 09:06:30 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:39:55.352 09:06:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:39:55.352 09:06:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:55.352 09:06:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:55.352 ************************************ 00:39:55.352 START TEST raid_state_function_test_sb_md_separate 00:39:55.352 ************************************ 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=163828 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 163828' 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:39:55.352 Process raid pid: 163828 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 163828 /var/tmp/spdk-raid.sock 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 163828 ']' 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:55.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:55.352 09:06:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:55.352 [2024-07-12 09:06:30.508810] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:39:55.352 [2024-07-12 09:06:30.509323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:55.610 [2024-07-12 09:06:30.672835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.867 [2024-07-12 09:06:30.885250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.124 [2024-07-12 09:06:31.078523] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:56.381 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:56.381 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:39:56.381 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:39:56.638 [2024-07-12 09:06:31.669059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:56.638 [2024-07-12 09:06:31.669338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:56.638 [2024-07-12 09:06:31.669441] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:56.638 [2024-07-12 09:06:31.669501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:56.638 
09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:56.638 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:56.895 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:56.895 "name": "Existed_Raid", 00:39:56.895 "uuid": "b70f1785-9809-42e2-9486-2c29a4e15b69", 00:39:56.895 "strip_size_kb": 0, 00:39:56.895 "state": "configuring", 00:39:56.895 "raid_level": "raid1", 00:39:56.895 "superblock": true, 00:39:56.895 "num_base_bdevs": 2, 00:39:56.895 "num_base_bdevs_discovered": 0, 00:39:56.895 "num_base_bdevs_operational": 2, 00:39:56.895 "base_bdevs_list": [ 00:39:56.895 { 00:39:56.895 "name": "BaseBdev1", 00:39:56.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.895 "is_configured": false, 00:39:56.895 "data_offset": 0, 00:39:56.895 "data_size": 0 00:39:56.895 }, 00:39:56.895 { 00:39:56.895 "name": "BaseBdev2", 00:39:56.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.895 "is_configured": false, 00:39:56.895 "data_offset": 0, 00:39:56.895 "data_size": 0 00:39:56.895 } 00:39:56.895 ] 00:39:56.895 }' 00:39:56.896 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:56.896 09:06:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:57.829 09:06:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:57.829 [2024-07-12 09:06:32.993281] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:57.829 [2024-07-12 09:06:32.993507] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:39:57.829 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:39:58.087 [2024-07-12 09:06:33.265362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:58.087 [2024-07-12 09:06:33.265647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:58.087 [2024-07-12 09:06:33.265783] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:58.087 [2024-07-12 09:06:33.265844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:58.087 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:39:58.346 [2024-07-12 09:06:33.524587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:58.346 BaseBdev1 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:58.604 09:06:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:58.862 [ 00:39:58.862 { 00:39:58.862 "name": "BaseBdev1", 00:39:58.862 "aliases": [ 00:39:58.862 "c05295a7-2d86-467c-8ac8-e0f647c4e8eb" 00:39:58.862 ], 00:39:58.862 "product_name": "Malloc disk", 00:39:58.862 "block_size": 4096, 00:39:58.862 "num_blocks": 8192, 00:39:58.862 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:39:58.862 "md_size": 32, 00:39:58.862 "md_interleave": false, 00:39:58.862 "dif_type": 0, 00:39:58.862 "assigned_rate_limits": { 00:39:58.862 "rw_ios_per_sec": 0, 00:39:58.862 "rw_mbytes_per_sec": 0, 00:39:58.862 "r_mbytes_per_sec": 0, 00:39:58.862 "w_mbytes_per_sec": 0 00:39:58.862 }, 00:39:58.862 "claimed": true, 00:39:58.862 "claim_type": "exclusive_write", 00:39:58.862 "zoned": false, 00:39:58.862 "supported_io_types": { 00:39:58.862 "read": true, 00:39:58.862 "write": true, 00:39:58.862 "unmap": true, 00:39:58.862 "flush": true, 00:39:58.862 "reset": true, 00:39:58.862 "nvme_admin": false, 00:39:58.862 "nvme_io": false, 00:39:58.862 "nvme_io_md": false, 00:39:58.862 "write_zeroes": true, 00:39:58.862 "zcopy": true, 00:39:58.862 "get_zone_info": false, 00:39:58.862 "zone_management": false, 00:39:58.862 "zone_append": false, 00:39:58.862 "compare": false, 00:39:58.863 "compare_and_write": false, 00:39:58.863 "abort": true, 00:39:58.863 "seek_hole": false, 00:39:58.863 "seek_data": false, 00:39:58.863 "copy": true, 00:39:58.863 "nvme_iov_md": false 00:39:58.863 }, 00:39:58.863 "memory_domains": [ 00:39:58.863 { 00:39:58.863 "dma_device_id": "system", 00:39:58.863 "dma_device_type": 1 00:39:58.863 }, 00:39:58.863 { 00:39:58.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:58.863 "dma_device_type": 2 00:39:58.863 } 00:39:58.863 ], 00:39:58.863 "driver_specific": {} 00:39:58.863 } 00:39:58.863 ] 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:39:58.863 09:06:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:58.863 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:59.121 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:59.121 "name": "Existed_Raid", 00:39:59.121 "uuid": "9bff55ed-353c-4781-ab76-e461ecdc2e00", 00:39:59.121 "strip_size_kb": 0, 00:39:59.121 "state": "configuring", 00:39:59.121 "raid_level": "raid1", 00:39:59.121 "superblock": true, 00:39:59.121 "num_base_bdevs": 2, 00:39:59.121 "num_base_bdevs_discovered": 1, 00:39:59.121 "num_base_bdevs_operational": 2, 00:39:59.121 "base_bdevs_list": [ 00:39:59.121 { 00:39:59.121 "name": "BaseBdev1", 00:39:59.121 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:39:59.121 "is_configured": true, 00:39:59.121 "data_offset": 256, 00:39:59.121 "data_size": 7936 00:39:59.121 }, 00:39:59.121 { 00:39:59.121 "name": "BaseBdev2", 00:39:59.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.121 "is_configured": false, 00:39:59.121 "data_offset": 0, 00:39:59.121 "data_size": 0 00:39:59.121 } 00:39:59.121 ] 00:39:59.121 }' 00:39:59.121 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:59.121 09:06:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:00.051 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:40:00.309 [2024-07-12 09:06:35.349221] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:00.309 [2024-07-12 09:06:35.349708] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:40:00.309 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:40:00.568 [2024-07-12 09:06:35.613317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:00.568 [2024-07-12 09:06:35.615603] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:00.568 [2024-07-12 09:06:35.615814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 
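For reference, the verify_raid_bdev_state call above boils down to one RPC query filtered with jq, as the trace that follows shows. A rough shell equivalent built only from the commands visible in this log (the exact field checks inside bdev_raid.sh may differ):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # fetch the named raid bdev and compare the fields passed to the helper (state, level, operational count)
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r '.state' <<< "$info") == configuring ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 2 ]]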
00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:00.568 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:00.825 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:00.825 "name": "Existed_Raid", 00:40:00.825 "uuid": "56b413d3-1fa1-47fe-8325-0f02335e116c", 00:40:00.825 "strip_size_kb": 0, 00:40:00.825 "state": "configuring", 00:40:00.825 "raid_level": "raid1", 00:40:00.825 "superblock": true, 00:40:00.825 "num_base_bdevs": 2, 00:40:00.825 "num_base_bdevs_discovered": 1, 00:40:00.825 "num_base_bdevs_operational": 2, 00:40:00.825 "base_bdevs_list": [ 00:40:00.825 { 00:40:00.825 "name": "BaseBdev1", 00:40:00.826 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:40:00.826 "is_configured": true, 00:40:00.826 "data_offset": 256, 00:40:00.826 "data_size": 7936 00:40:00.826 }, 00:40:00.826 { 00:40:00.826 "name": "BaseBdev2", 00:40:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:00.826 "is_configured": false, 00:40:00.826 "data_offset": 0, 00:40:00.826 "data_size": 0 00:40:00.826 } 00:40:00.826 ] 00:40:00.826 }' 00:40:00.826 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:00.826 09:06:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:01.391 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:40:01.649 [2024-07-12 09:06:36.786317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:01.649 [2024-07-12 09:06:36.786823] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:40:01.649 [2024-07-12 09:06:36.786953] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:01.649 [2024-07-12 09:06:36.787122] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:40:01.649 BaseBdev2 00:40:01.649 [2024-07-12 09:06:36.787357] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x616000007580 00:40:01.649 [2024-07-12 09:06:36.787373] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:40:01.649 [2024-07-12 09:06:36.787497] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:01.649 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:40:01.649 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:40:01.649 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:40:01.649 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:40:01.650 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:40:01.650 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:40:01.650 09:06:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:40:01.907 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:02.165 [ 00:40:02.165 { 00:40:02.165 "name": "BaseBdev2", 00:40:02.165 "aliases": [ 00:40:02.165 "fefbd228-2f13-4848-ab1a-7688f7122e20" 00:40:02.165 ], 00:40:02.165 "product_name": "Malloc disk", 00:40:02.165 "block_size": 4096, 00:40:02.165 "num_blocks": 8192, 00:40:02.165 "uuid": "fefbd228-2f13-4848-ab1a-7688f7122e20", 00:40:02.165 "md_size": 32, 00:40:02.165 "md_interleave": false, 00:40:02.165 "dif_type": 0, 00:40:02.165 "assigned_rate_limits": { 00:40:02.165 "rw_ios_per_sec": 0, 00:40:02.165 "rw_mbytes_per_sec": 0, 00:40:02.165 "r_mbytes_per_sec": 0, 00:40:02.165 "w_mbytes_per_sec": 0 00:40:02.165 }, 00:40:02.165 "claimed": true, 00:40:02.165 "claim_type": "exclusive_write", 00:40:02.165 "zoned": false, 00:40:02.165 "supported_io_types": { 00:40:02.165 "read": true, 00:40:02.165 "write": true, 00:40:02.165 "unmap": true, 00:40:02.165 "flush": true, 00:40:02.165 "reset": true, 00:40:02.165 "nvme_admin": false, 00:40:02.165 "nvme_io": false, 00:40:02.165 "nvme_io_md": false, 00:40:02.165 "write_zeroes": true, 00:40:02.165 "zcopy": true, 00:40:02.165 "get_zone_info": false, 00:40:02.165 "zone_management": false, 00:40:02.165 "zone_append": false, 00:40:02.165 "compare": false, 00:40:02.165 "compare_and_write": false, 00:40:02.165 "abort": true, 00:40:02.165 "seek_hole": false, 00:40:02.165 "seek_data": false, 00:40:02.165 "copy": true, 00:40:02.165 "nvme_iov_md": false 00:40:02.165 }, 00:40:02.165 "memory_domains": [ 00:40:02.165 { 00:40:02.165 "dma_device_id": "system", 00:40:02.165 "dma_device_type": 1 00:40:02.165 }, 00:40:02.165 { 00:40:02.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:02.165 "dma_device_type": 2 00:40:02.165 } 00:40:02.165 ], 00:40:02.165 "driver_specific": {} 00:40:02.165 } 00:40:02.165 ] 00:40:02.165 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:40:02.165 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:40:02.165 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:40:02.165 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:02.166 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:02.424 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:02.424 "name": "Existed_Raid", 00:40:02.424 "uuid": "56b413d3-1fa1-47fe-8325-0f02335e116c", 00:40:02.424 "strip_size_kb": 0, 00:40:02.424 "state": "online", 00:40:02.424 "raid_level": "raid1", 00:40:02.424 "superblock": true, 00:40:02.424 "num_base_bdevs": 2, 00:40:02.424 "num_base_bdevs_discovered": 2, 00:40:02.424 "num_base_bdevs_operational": 2, 00:40:02.424 "base_bdevs_list": [ 00:40:02.424 { 00:40:02.424 "name": "BaseBdev1", 00:40:02.424 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:40:02.424 "is_configured": true, 00:40:02.424 "data_offset": 256, 00:40:02.424 "data_size": 7936 00:40:02.424 }, 00:40:02.424 { 00:40:02.424 "name": "BaseBdev2", 00:40:02.424 "uuid": "fefbd228-2f13-4848-ab1a-7688f7122e20", 00:40:02.424 "is_configured": true, 00:40:02.424 "data_offset": 256, 00:40:02.424 "data_size": 7936 00:40:02.424 } 00:40:02.424 ] 00:40:02.424 }' 00:40:02.424 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:02.424 09:06:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:40:02.989 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:40:03.246 [2024-07-12 09:06:38.399175] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:03.246 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:03.246 "name": "Existed_Raid", 00:40:03.246 "aliases": [ 00:40:03.246 "56b413d3-1fa1-47fe-8325-0f02335e116c" 00:40:03.246 ], 00:40:03.246 "product_name": "Raid Volume", 00:40:03.246 "block_size": 4096, 00:40:03.246 "num_blocks": 7936, 00:40:03.246 "uuid": "56b413d3-1fa1-47fe-8325-0f02335e116c", 00:40:03.246 "md_size": 32, 00:40:03.246 "md_interleave": false, 00:40:03.246 "dif_type": 0, 00:40:03.246 "assigned_rate_limits": { 00:40:03.246 "rw_ios_per_sec": 0, 00:40:03.246 "rw_mbytes_per_sec": 0, 00:40:03.246 "r_mbytes_per_sec": 0, 00:40:03.246 "w_mbytes_per_sec": 0 00:40:03.246 }, 00:40:03.246 "claimed": false, 00:40:03.246 "zoned": false, 00:40:03.246 "supported_io_types": { 00:40:03.246 "read": true, 00:40:03.246 "write": true, 00:40:03.246 "unmap": false, 00:40:03.246 "flush": false, 00:40:03.246 "reset": true, 00:40:03.246 "nvme_admin": false, 00:40:03.246 "nvme_io": false, 00:40:03.246 "nvme_io_md": false, 00:40:03.246 "write_zeroes": true, 00:40:03.246 "zcopy": false, 00:40:03.246 "get_zone_info": false, 00:40:03.246 "zone_management": false, 00:40:03.246 "zone_append": false, 00:40:03.246 "compare": false, 00:40:03.246 "compare_and_write": false, 00:40:03.246 "abort": false, 00:40:03.247 "seek_hole": false, 00:40:03.247 "seek_data": false, 00:40:03.247 "copy": false, 00:40:03.247 "nvme_iov_md": false 00:40:03.247 }, 00:40:03.247 "memory_domains": [ 00:40:03.247 { 00:40:03.247 "dma_device_id": "system", 00:40:03.247 "dma_device_type": 1 00:40:03.247 }, 00:40:03.247 { 00:40:03.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:03.247 "dma_device_type": 2 00:40:03.247 }, 00:40:03.247 { 00:40:03.247 "dma_device_id": "system", 00:40:03.247 "dma_device_type": 1 00:40:03.247 }, 00:40:03.247 { 00:40:03.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:03.247 "dma_device_type": 2 00:40:03.247 } 00:40:03.247 ], 00:40:03.247 "driver_specific": { 00:40:03.247 "raid": { 00:40:03.247 "uuid": "56b413d3-1fa1-47fe-8325-0f02335e116c", 00:40:03.247 "strip_size_kb": 0, 00:40:03.247 "state": "online", 00:40:03.247 "raid_level": "raid1", 00:40:03.247 "superblock": true, 00:40:03.247 "num_base_bdevs": 2, 00:40:03.247 "num_base_bdevs_discovered": 2, 00:40:03.247 "num_base_bdevs_operational": 2, 00:40:03.247 "base_bdevs_list": [ 00:40:03.247 { 00:40:03.247 "name": "BaseBdev1", 00:40:03.247 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:40:03.247 "is_configured": true, 00:40:03.247 "data_offset": 256, 00:40:03.247 "data_size": 7936 00:40:03.247 }, 00:40:03.247 { 00:40:03.247 "name": "BaseBdev2", 00:40:03.247 "uuid": "fefbd228-2f13-4848-ab1a-7688f7122e20", 00:40:03.247 "is_configured": true, 00:40:03.247 "data_offset": 256, 00:40:03.247 "data_size": 7936 00:40:03.247 } 00:40:03.247 ] 00:40:03.247 } 00:40:03.247 } 00:40:03.247 }' 00:40:03.247 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:40:03.505 BaseBdev2' 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:03.505 "name": "BaseBdev1", 00:40:03.505 "aliases": [ 00:40:03.505 "c05295a7-2d86-467c-8ac8-e0f647c4e8eb" 00:40:03.505 ], 00:40:03.505 "product_name": "Malloc disk", 00:40:03.505 "block_size": 4096, 00:40:03.505 "num_blocks": 8192, 00:40:03.505 "uuid": "c05295a7-2d86-467c-8ac8-e0f647c4e8eb", 00:40:03.505 "md_size": 32, 00:40:03.505 "md_interleave": false, 00:40:03.505 "dif_type": 0, 00:40:03.505 "assigned_rate_limits": { 00:40:03.505 "rw_ios_per_sec": 0, 00:40:03.505 "rw_mbytes_per_sec": 0, 00:40:03.505 "r_mbytes_per_sec": 0, 00:40:03.505 "w_mbytes_per_sec": 0 00:40:03.505 }, 00:40:03.505 "claimed": true, 00:40:03.505 "claim_type": "exclusive_write", 00:40:03.505 "zoned": false, 00:40:03.505 "supported_io_types": { 00:40:03.505 "read": true, 00:40:03.505 "write": true, 00:40:03.505 "unmap": true, 00:40:03.505 "flush": true, 00:40:03.505 "reset": true, 00:40:03.505 "nvme_admin": false, 00:40:03.505 "nvme_io": false, 00:40:03.505 "nvme_io_md": false, 00:40:03.505 "write_zeroes": true, 00:40:03.505 "zcopy": true, 00:40:03.505 "get_zone_info": false, 00:40:03.505 "zone_management": false, 00:40:03.505 "zone_append": false, 00:40:03.505 "compare": false, 00:40:03.505 "compare_and_write": false, 00:40:03.505 "abort": true, 00:40:03.505 "seek_hole": false, 00:40:03.505 "seek_data": false, 00:40:03.505 "copy": true, 00:40:03.505 "nvme_iov_md": false 00:40:03.505 }, 00:40:03.505 "memory_domains": [ 00:40:03.505 { 00:40:03.505 "dma_device_id": "system", 00:40:03.505 "dma_device_type": 1 00:40:03.505 }, 00:40:03.505 { 00:40:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:03.505 "dma_device_type": 2 00:40:03.505 } 00:40:03.505 ], 00:40:03.505 "driver_specific": {} 00:40:03.505 }' 00:40:03.505 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:03.763 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:04.020 09:06:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# [[ false == false ]] 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:40:04.020 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:04.278 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:04.278 "name": "BaseBdev2", 00:40:04.278 "aliases": [ 00:40:04.278 "fefbd228-2f13-4848-ab1a-7688f7122e20" 00:40:04.278 ], 00:40:04.278 "product_name": "Malloc disk", 00:40:04.278 "block_size": 4096, 00:40:04.278 "num_blocks": 8192, 00:40:04.278 "uuid": "fefbd228-2f13-4848-ab1a-7688f7122e20", 00:40:04.278 "md_size": 32, 00:40:04.278 "md_interleave": false, 00:40:04.278 "dif_type": 0, 00:40:04.278 "assigned_rate_limits": { 00:40:04.278 "rw_ios_per_sec": 0, 00:40:04.278 "rw_mbytes_per_sec": 0, 00:40:04.278 "r_mbytes_per_sec": 0, 00:40:04.278 "w_mbytes_per_sec": 0 00:40:04.278 }, 00:40:04.278 "claimed": true, 00:40:04.278 "claim_type": "exclusive_write", 00:40:04.278 "zoned": false, 00:40:04.278 "supported_io_types": { 00:40:04.278 "read": true, 00:40:04.278 "write": true, 00:40:04.278 "unmap": true, 00:40:04.278 "flush": true, 00:40:04.278 "reset": true, 00:40:04.278 "nvme_admin": false, 00:40:04.278 "nvme_io": false, 00:40:04.278 "nvme_io_md": false, 00:40:04.278 "write_zeroes": true, 00:40:04.278 "zcopy": true, 00:40:04.278 "get_zone_info": false, 00:40:04.278 "zone_management": false, 00:40:04.278 "zone_append": false, 00:40:04.278 "compare": false, 00:40:04.278 "compare_and_write": false, 00:40:04.278 "abort": true, 00:40:04.278 "seek_hole": false, 00:40:04.278 "seek_data": false, 00:40:04.278 "copy": true, 00:40:04.278 "nvme_iov_md": false 00:40:04.278 }, 00:40:04.278 "memory_domains": [ 00:40:04.278 { 00:40:04.278 "dma_device_id": "system", 00:40:04.278 "dma_device_type": 1 00:40:04.278 }, 00:40:04.278 { 00:40:04.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:04.278 "dma_device_type": 2 00:40:04.278 } 00:40:04.278 ], 00:40:04.278 "driver_specific": {} 00:40:04.278 }' 00:40:04.278 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:04.278 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:04.535 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:04.792 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:04.792 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:04.792 09:06:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:40:05.050 [2024-07-12 09:06:40.119538] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:05.050 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:05.051 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:05.051 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:05.051 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:05.051 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:05.051 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:05.309 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:05.309 "name": "Existed_Raid", 00:40:05.309 "uuid": "56b413d3-1fa1-47fe-8325-0f02335e116c", 00:40:05.309 "strip_size_kb": 0, 00:40:05.309 "state": "online", 00:40:05.309 "raid_level": "raid1", 00:40:05.309 "superblock": true, 00:40:05.309 "num_base_bdevs": 2, 00:40:05.309 "num_base_bdevs_discovered": 1, 00:40:05.309 "num_base_bdevs_operational": 1, 00:40:05.309 
"base_bdevs_list": [ 00:40:05.309 { 00:40:05.309 "name": null, 00:40:05.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.309 "is_configured": false, 00:40:05.309 "data_offset": 256, 00:40:05.309 "data_size": 7936 00:40:05.309 }, 00:40:05.309 { 00:40:05.309 "name": "BaseBdev2", 00:40:05.309 "uuid": "fefbd228-2f13-4848-ab1a-7688f7122e20", 00:40:05.309 "is_configured": true, 00:40:05.309 "data_offset": 256, 00:40:05.309 "data_size": 7936 00:40:05.309 } 00:40:05.309 ] 00:40:05.309 }' 00:40:05.309 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:05.309 09:06:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:06.244 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:40:06.502 [2024-07-12 09:06:41.629541] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:06.502 [2024-07-12 09:06:41.629915] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:06.760 [2024-07-12 09:06:41.723093] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:06.760 [2024-07-12 09:06:41.723333] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:06.760 [2024-07-12 09:06:41.723471] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:40:06.760 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:40:06.760 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:40:06.760 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:06.760 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:40:07.025 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 163828 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@948 -- # '[' -z 163828 ']' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 163828 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163828 00:40:07.026 killing process with pid 163828 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163828' 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 163828 00:40:07.026 09:06:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 163828 00:40:07.026 [2024-07-12 09:06:41.977348] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:07.026 [2024-07-12 09:06:41.977490] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:07.965 ************************************ 00:40:07.965 END TEST raid_state_function_test_sb_md_separate 00:40:07.965 ************************************ 00:40:07.965 09:06:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:40:07.965 00:40:07.965 real 0m12.699s 00:40:07.965 user 0m22.666s 00:40:07.965 sys 0m1.385s 00:40:07.965 09:06:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:07.965 09:06:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:08.223 09:06:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:40:08.223 09:06:43 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:40:08.223 09:06:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:40:08.223 09:06:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:08.223 09:06:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:08.223 ************************************ 00:40:08.223 START TEST raid_superblock_test_md_separate 00:40:08.223 ************************************ 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:40:08.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
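The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock..." message above is the harness blocking until the bdev_svc app it just launched (traced right below) answers on that RPC socket. A minimal sketch of the launch-and-wait pattern, assuming a plain polling loop rather than the full waitforlisten helper:
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll the RPC socket until the app answers (give up after ~30 s)
  for _ in $(seq 1 300); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done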
00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=164234 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 164234 /var/tmp/spdk-raid.sock 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 164234 ']' 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:08.223 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:08.224 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:08.224 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:08.224 09:06:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:08.224 [2024-07-12 09:06:43.260696] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
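The startup banner around this point is the bdev_svc app coming up; once it is listening, the trace further below assembles the stack this superblock test runs against: two malloc bdevs with separate per-block metadata, a passthru bdev on each, and a raid1 bdev created over the passthru pair with -s so a superblock is written. Condensed into one sketch (the rpc wrapper function is shorthand for this note only, not something the test defines):
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # 32 MiB malloc bdevs, 4096-byte blocks, 32 bytes of separate metadata per block (8192 blocks total)
  rpc bdev_malloc_create 32 4096 -m 32 -b malloc1
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc bdev_malloc_create 32 4096 -m 32 -b malloc2
  rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid1 over the two passthru bdevs; -s requests an on-disk superblock
  rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s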
00:40:08.224 [2024-07-12 09:06:43.261204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164234 ] 00:40:08.482 [2024-07-12 09:06:43.434373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.739 [2024-07-12 09:06:43.679929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.739 [2024-07-12 09:06:43.887861] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:40:09.305 malloc1 00:40:09.305 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:09.871 [2024-07-12 09:06:44.796931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:09.871 [2024-07-12 09:06:44.797331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:09.871 [2024-07-12 09:06:44.797423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:40:09.871 [2024-07-12 09:06:44.797648] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:09.871 [2024-07-12 09:06:44.800169] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:09.871 [2024-07-12 09:06:44.800379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:09.871 pt1 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:40:09.871 
09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:09.871 09:06:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:40:10.130 malloc2 00:40:10.130 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:10.389 [2024-07-12 09:06:45.403104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:10.389 [2024-07-12 09:06:45.403501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:10.389 [2024-07-12 09:06:45.403686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:40:10.389 [2024-07-12 09:06:45.403815] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:10.389 [2024-07-12 09:06:45.406231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:10.389 [2024-07-12 09:06:45.406417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:10.389 pt2 00:40:10.389 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:40:10.389 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:40:10.389 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:40:10.648 [2024-07-12 09:06:45.659424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:10.648 [2024-07-12 09:06:45.661930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:10.648 [2024-07-12 09:06:45.662344] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:40:10.648 [2024-07-12 09:06:45.662494] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:10.648 [2024-07-12 09:06:45.662725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:40:10.648 [2024-07-12 09:06:45.662978] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:40:10.648 [2024-07-12 09:06:45.663082] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:40:10.648 [2024-07-12 09:06:45.663356] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:10.648 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:10.907 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:10.907 "name": "raid_bdev1", 00:40:10.907 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:10.907 "strip_size_kb": 0, 00:40:10.907 "state": "online", 00:40:10.907 "raid_level": "raid1", 00:40:10.907 "superblock": true, 00:40:10.907 "num_base_bdevs": 2, 00:40:10.907 "num_base_bdevs_discovered": 2, 00:40:10.907 "num_base_bdevs_operational": 2, 00:40:10.907 "base_bdevs_list": [ 00:40:10.907 { 00:40:10.907 "name": "pt1", 00:40:10.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:10.907 "is_configured": true, 00:40:10.907 "data_offset": 256, 00:40:10.907 "data_size": 7936 00:40:10.907 }, 00:40:10.907 { 00:40:10.907 "name": "pt2", 00:40:10.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:10.907 "is_configured": true, 00:40:10.907 "data_offset": 256, 00:40:10.907 "data_size": 7936 00:40:10.907 } 00:40:10.907 ] 00:40:10.907 }' 00:40:10.907 09:06:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:10.907 09:06:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:11.474 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:40:12.040 [2024-07-12 09:06:46.931995] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:12.040 
09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:12.040 "name": "raid_bdev1", 00:40:12.040 "aliases": [ 00:40:12.040 "70959594-97b7-4742-bf00-3fb012dc4203" 00:40:12.040 ], 00:40:12.040 "product_name": "Raid Volume", 00:40:12.040 "block_size": 4096, 00:40:12.040 "num_blocks": 7936, 00:40:12.040 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:12.040 "md_size": 32, 00:40:12.040 "md_interleave": false, 00:40:12.040 "dif_type": 0, 00:40:12.040 "assigned_rate_limits": { 00:40:12.040 "rw_ios_per_sec": 0, 00:40:12.040 "rw_mbytes_per_sec": 0, 00:40:12.040 "r_mbytes_per_sec": 0, 00:40:12.040 "w_mbytes_per_sec": 0 00:40:12.040 }, 00:40:12.040 "claimed": false, 00:40:12.040 "zoned": false, 00:40:12.040 "supported_io_types": { 00:40:12.040 "read": true, 00:40:12.040 "write": true, 00:40:12.040 "unmap": false, 00:40:12.040 "flush": false, 00:40:12.040 "reset": true, 00:40:12.040 "nvme_admin": false, 00:40:12.040 "nvme_io": false, 00:40:12.040 "nvme_io_md": false, 00:40:12.040 "write_zeroes": true, 00:40:12.040 "zcopy": false, 00:40:12.040 "get_zone_info": false, 00:40:12.040 "zone_management": false, 00:40:12.040 "zone_append": false, 00:40:12.040 "compare": false, 00:40:12.040 "compare_and_write": false, 00:40:12.040 "abort": false, 00:40:12.040 "seek_hole": false, 00:40:12.040 "seek_data": false, 00:40:12.040 "copy": false, 00:40:12.040 "nvme_iov_md": false 00:40:12.040 }, 00:40:12.040 "memory_domains": [ 00:40:12.040 { 00:40:12.040 "dma_device_id": "system", 00:40:12.040 "dma_device_type": 1 00:40:12.041 }, 00:40:12.041 { 00:40:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:12.041 "dma_device_type": 2 00:40:12.041 }, 00:40:12.041 { 00:40:12.041 "dma_device_id": "system", 00:40:12.041 "dma_device_type": 1 00:40:12.041 }, 00:40:12.041 { 00:40:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:12.041 "dma_device_type": 2 00:40:12.041 } 00:40:12.041 ], 00:40:12.041 "driver_specific": { 00:40:12.041 "raid": { 00:40:12.041 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:12.041 "strip_size_kb": 0, 00:40:12.041 "state": "online", 00:40:12.041 "raid_level": "raid1", 00:40:12.041 "superblock": true, 00:40:12.041 "num_base_bdevs": 2, 00:40:12.041 "num_base_bdevs_discovered": 2, 00:40:12.041 "num_base_bdevs_operational": 2, 00:40:12.041 "base_bdevs_list": [ 00:40:12.041 { 00:40:12.041 "name": "pt1", 00:40:12.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:12.041 "is_configured": true, 00:40:12.041 "data_offset": 256, 00:40:12.041 "data_size": 7936 00:40:12.041 }, 00:40:12.041 { 00:40:12.041 "name": "pt2", 00:40:12.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:12.041 "is_configured": true, 00:40:12.041 "data_offset": 256, 00:40:12.041 "data_size": 7936 00:40:12.041 } 00:40:12.041 ] 00:40:12.041 } 00:40:12.041 } 00:40:12.041 }' 00:40:12.041 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:12.041 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:40:12.041 pt2' 00:40:12.041 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:12.041 09:06:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:40:12.041 09:06:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:12.300 "name": "pt1", 00:40:12.300 "aliases": [ 00:40:12.300 "00000000-0000-0000-0000-000000000001" 00:40:12.300 ], 00:40:12.300 "product_name": "passthru", 00:40:12.300 "block_size": 4096, 00:40:12.300 "num_blocks": 8192, 00:40:12.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:12.300 "md_size": 32, 00:40:12.300 "md_interleave": false, 00:40:12.300 "dif_type": 0, 00:40:12.300 "assigned_rate_limits": { 00:40:12.300 "rw_ios_per_sec": 0, 00:40:12.300 "rw_mbytes_per_sec": 0, 00:40:12.300 "r_mbytes_per_sec": 0, 00:40:12.300 "w_mbytes_per_sec": 0 00:40:12.300 }, 00:40:12.300 "claimed": true, 00:40:12.300 "claim_type": "exclusive_write", 00:40:12.300 "zoned": false, 00:40:12.300 "supported_io_types": { 00:40:12.300 "read": true, 00:40:12.300 "write": true, 00:40:12.300 "unmap": true, 00:40:12.300 "flush": true, 00:40:12.300 "reset": true, 00:40:12.300 "nvme_admin": false, 00:40:12.300 "nvme_io": false, 00:40:12.300 "nvme_io_md": false, 00:40:12.300 "write_zeroes": true, 00:40:12.300 "zcopy": true, 00:40:12.300 "get_zone_info": false, 00:40:12.300 "zone_management": false, 00:40:12.300 "zone_append": false, 00:40:12.300 "compare": false, 00:40:12.300 "compare_and_write": false, 00:40:12.300 "abort": true, 00:40:12.300 "seek_hole": false, 00:40:12.300 "seek_data": false, 00:40:12.300 "copy": true, 00:40:12.300 "nvme_iov_md": false 00:40:12.300 }, 00:40:12.300 "memory_domains": [ 00:40:12.300 { 00:40:12.300 "dma_device_id": "system", 00:40:12.300 "dma_device_type": 1 00:40:12.300 }, 00:40:12.300 { 00:40:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:12.300 "dma_device_type": 2 00:40:12.300 } 00:40:12.300 ], 00:40:12.300 "driver_specific": { 00:40:12.300 "passthru": { 00:40:12.300 "name": "pt1", 00:40:12.300 "base_bdev_name": "malloc1" 00:40:12.300 } 00:40:12.300 } 00:40:12.300 }' 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:12.300 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:12.558 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:12.816 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:40:12.816 09:06:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:13.075 "name": "pt2", 00:40:13.075 "aliases": [ 00:40:13.075 "00000000-0000-0000-0000-000000000002" 00:40:13.075 ], 00:40:13.075 "product_name": "passthru", 00:40:13.075 "block_size": 4096, 00:40:13.075 "num_blocks": 8192, 00:40:13.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:13.075 "md_size": 32, 00:40:13.075 "md_interleave": false, 00:40:13.075 "dif_type": 0, 00:40:13.075 "assigned_rate_limits": { 00:40:13.075 "rw_ios_per_sec": 0, 00:40:13.075 "rw_mbytes_per_sec": 0, 00:40:13.075 "r_mbytes_per_sec": 0, 00:40:13.075 "w_mbytes_per_sec": 0 00:40:13.075 }, 00:40:13.075 "claimed": true, 00:40:13.075 "claim_type": "exclusive_write", 00:40:13.075 "zoned": false, 00:40:13.075 "supported_io_types": { 00:40:13.075 "read": true, 00:40:13.075 "write": true, 00:40:13.075 "unmap": true, 00:40:13.075 "flush": true, 00:40:13.075 "reset": true, 00:40:13.075 "nvme_admin": false, 00:40:13.075 "nvme_io": false, 00:40:13.075 "nvme_io_md": false, 00:40:13.075 "write_zeroes": true, 00:40:13.075 "zcopy": true, 00:40:13.075 "get_zone_info": false, 00:40:13.075 "zone_management": false, 00:40:13.075 "zone_append": false, 00:40:13.075 "compare": false, 00:40:13.075 "compare_and_write": false, 00:40:13.075 "abort": true, 00:40:13.075 "seek_hole": false, 00:40:13.075 "seek_data": false, 00:40:13.075 "copy": true, 00:40:13.075 "nvme_iov_md": false 00:40:13.075 }, 00:40:13.075 "memory_domains": [ 00:40:13.075 { 00:40:13.075 "dma_device_id": "system", 00:40:13.075 "dma_device_type": 1 00:40:13.075 }, 00:40:13.075 { 00:40:13.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:13.075 "dma_device_type": 2 00:40:13.075 } 00:40:13.075 ], 00:40:13.075 "driver_specific": { 00:40:13.075 "passthru": { 00:40:13.075 "name": "pt2", 00:40:13.075 "base_bdev_name": "malloc2" 00:40:13.075 } 00:40:13.075 } 00:40:13.075 }' 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:13.075 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:13.333 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:13.591 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:13.591 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
jq -r '.[] | .uuid' 00:40:13.591 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:13.849 [2024-07-12 09:06:48.804484] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:13.849 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=70959594-97b7-4742-bf00-3fb012dc4203 00:40:13.849 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 70959594-97b7-4742-bf00-3fb012dc4203 ']' 00:40:13.849 09:06:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:14.106 [2024-07-12 09:06:49.100255] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:14.106 [2024-07-12 09:06:49.100454] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:14.106 [2024-07-12 09:06:49.100647] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:14.106 [2024-07-12 09:06:49.100825] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:14.106 [2024-07-12 09:06:49.100929] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:40:14.106 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:14.106 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:40:14.364 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:14.622 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:40:14.622 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:14.880 09:06:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:40:15.138 [2024-07-12 09:06:50.168530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:40:15.138 [2024-07-12 09:06:50.170943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:40:15.138 [2024-07-12 09:06:50.171157] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:40:15.138 [2024-07-12 09:06:50.171401] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:40:15.138 [2024-07-12 09:06:50.171482] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:15.138 [2024-07-12 09:06:50.171624] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:40:15.138 request: 00:40:15.138 { 00:40:15.138 "name": "raid_bdev1", 00:40:15.138 "raid_level": "raid1", 00:40:15.138 "base_bdevs": [ 00:40:15.138 "malloc1", 00:40:15.138 "malloc2" 00:40:15.138 ], 00:40:15.138 "superblock": false, 00:40:15.138 "method": "bdev_raid_create", 00:40:15.138 "req_id": 1 00:40:15.138 } 00:40:15.138 Got JSON-RPC error response 00:40:15.138 response: 00:40:15.138 { 00:40:15.138 "code": -17, 00:40:15.138 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:40:15.138 } 00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:40:15.138 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:40:15.396 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:40:15.396 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:40:15.396 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:15.655 [2024-07-12 09:06:50.636622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:15.655 [2024-07-12 09:06:50.636912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:15.655 [2024-07-12 09:06:50.637053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:40:15.655 [2024-07-12 09:06:50.637191] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:15.655 [2024-07-12 09:06:50.639489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:15.655 [2024-07-12 09:06:50.639686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:15.655 [2024-07-12 09:06:50.639932] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:15.655 [2024-07-12 09:06:50.640087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:15.655 pt1 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:15.655 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:15.914 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:15.914 "name": "raid_bdev1", 00:40:15.914 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:15.914 "strip_size_kb": 0, 00:40:15.914 "state": "configuring", 00:40:15.914 "raid_level": "raid1", 00:40:15.914 "superblock": true, 00:40:15.914 "num_base_bdevs": 2, 00:40:15.914 "num_base_bdevs_discovered": 1, 00:40:15.914 
"num_base_bdevs_operational": 2, 00:40:15.914 "base_bdevs_list": [ 00:40:15.914 { 00:40:15.914 "name": "pt1", 00:40:15.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:15.914 "is_configured": true, 00:40:15.914 "data_offset": 256, 00:40:15.914 "data_size": 7936 00:40:15.914 }, 00:40:15.914 { 00:40:15.914 "name": null, 00:40:15.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:15.914 "is_configured": false, 00:40:15.914 "data_offset": 256, 00:40:15.914 "data_size": 7936 00:40:15.914 } 00:40:15.914 ] 00:40:15.914 }' 00:40:15.914 09:06:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:15.914 09:06:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:16.480 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:40:16.480 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:40:16.480 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:40:16.480 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:16.737 [2024-07-12 09:06:51.869018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:16.737 [2024-07-12 09:06:51.869308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:16.737 [2024-07-12 09:06:51.869448] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:40:16.737 [2024-07-12 09:06:51.869574] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:16.737 [2024-07-12 09:06:51.869974] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:16.737 [2024-07-12 09:06:51.870133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:16.737 [2024-07-12 09:06:51.870338] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:16.737 [2024-07-12 09:06:51.870458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:16.737 [2024-07-12 09:06:51.870668] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:40:16.737 [2024-07-12 09:06:51.870768] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:16.737 [2024-07-12 09:06:51.870906] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:40:16.737 [2024-07-12 09:06:51.871122] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:40:16.737 [2024-07-12 09:06:51.871221] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:40:16.737 [2024-07-12 09:06:51.871419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:16.737 pt2 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:16.737 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:16.738 09:06:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:16.995 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:16.995 "name": "raid_bdev1", 00:40:16.995 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:16.995 "strip_size_kb": 0, 00:40:16.995 "state": "online", 00:40:16.995 "raid_level": "raid1", 00:40:16.995 "superblock": true, 00:40:16.995 "num_base_bdevs": 2, 00:40:16.995 "num_base_bdevs_discovered": 2, 00:40:16.995 "num_base_bdevs_operational": 2, 00:40:16.995 "base_bdevs_list": [ 00:40:16.995 { 00:40:16.995 "name": "pt1", 00:40:16.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:16.995 "is_configured": true, 00:40:16.995 "data_offset": 256, 00:40:16.995 "data_size": 7936 00:40:16.995 }, 00:40:16.995 { 00:40:16.995 "name": "pt2", 00:40:16.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:16.995 "is_configured": true, 00:40:16.995 "data_offset": 256, 00:40:16.995 "data_size": 7936 00:40:16.995 } 00:40:16.995 ] 00:40:16.995 }' 00:40:16.995 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:16.995 09:06:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:17.929 09:06:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:40:18.186 [2024-07-12 09:06:53.133862] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:40:18.186 "name": "raid_bdev1", 00:40:18.186 "aliases": [ 00:40:18.186 "70959594-97b7-4742-bf00-3fb012dc4203" 00:40:18.186 ], 00:40:18.186 "product_name": "Raid Volume", 00:40:18.186 "block_size": 4096, 00:40:18.186 "num_blocks": 7936, 00:40:18.186 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:18.186 "md_size": 32, 00:40:18.186 "md_interleave": false, 00:40:18.186 "dif_type": 0, 00:40:18.186 "assigned_rate_limits": { 00:40:18.186 "rw_ios_per_sec": 0, 00:40:18.186 "rw_mbytes_per_sec": 0, 00:40:18.186 "r_mbytes_per_sec": 0, 00:40:18.186 "w_mbytes_per_sec": 0 00:40:18.186 }, 00:40:18.186 "claimed": false, 00:40:18.186 "zoned": false, 00:40:18.186 "supported_io_types": { 00:40:18.186 "read": true, 00:40:18.186 "write": true, 00:40:18.186 "unmap": false, 00:40:18.186 "flush": false, 00:40:18.186 "reset": true, 00:40:18.186 "nvme_admin": false, 00:40:18.186 "nvme_io": false, 00:40:18.186 "nvme_io_md": false, 00:40:18.186 "write_zeroes": true, 00:40:18.186 "zcopy": false, 00:40:18.186 "get_zone_info": false, 00:40:18.186 "zone_management": false, 00:40:18.186 "zone_append": false, 00:40:18.186 "compare": false, 00:40:18.186 "compare_and_write": false, 00:40:18.186 "abort": false, 00:40:18.186 "seek_hole": false, 00:40:18.186 "seek_data": false, 00:40:18.186 "copy": false, 00:40:18.186 "nvme_iov_md": false 00:40:18.186 }, 00:40:18.186 "memory_domains": [ 00:40:18.186 { 00:40:18.186 "dma_device_id": "system", 00:40:18.186 "dma_device_type": 1 00:40:18.186 }, 00:40:18.186 { 00:40:18.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:18.186 "dma_device_type": 2 00:40:18.186 }, 00:40:18.186 { 00:40:18.186 "dma_device_id": "system", 00:40:18.186 "dma_device_type": 1 00:40:18.186 }, 00:40:18.186 { 00:40:18.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:18.186 "dma_device_type": 2 00:40:18.186 } 00:40:18.186 ], 00:40:18.186 "driver_specific": { 00:40:18.186 "raid": { 00:40:18.186 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:18.186 "strip_size_kb": 0, 00:40:18.186 "state": "online", 00:40:18.186 "raid_level": "raid1", 00:40:18.186 "superblock": true, 00:40:18.186 "num_base_bdevs": 2, 00:40:18.186 "num_base_bdevs_discovered": 2, 00:40:18.186 "num_base_bdevs_operational": 2, 00:40:18.186 "base_bdevs_list": [ 00:40:18.186 { 00:40:18.186 "name": "pt1", 00:40:18.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:18.186 "is_configured": true, 00:40:18.186 "data_offset": 256, 00:40:18.186 "data_size": 7936 00:40:18.186 }, 00:40:18.186 { 00:40:18.186 "name": "pt2", 00:40:18.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:18.186 "is_configured": true, 00:40:18.186 "data_offset": 256, 00:40:18.186 "data_size": 7936 00:40:18.186 } 00:40:18.186 ] 00:40:18.186 } 00:40:18.186 } 00:40:18.186 }' 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:40:18.186 pt2' 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:40:18.186 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:18.457 "name": "pt1", 00:40:18.457 "aliases": [ 00:40:18.457 "00000000-0000-0000-0000-000000000001" 00:40:18.457 ], 00:40:18.457 "product_name": "passthru", 00:40:18.457 "block_size": 4096, 00:40:18.457 "num_blocks": 8192, 00:40:18.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:18.457 "md_size": 32, 00:40:18.457 "md_interleave": false, 00:40:18.457 "dif_type": 0, 00:40:18.457 "assigned_rate_limits": { 00:40:18.457 "rw_ios_per_sec": 0, 00:40:18.457 "rw_mbytes_per_sec": 0, 00:40:18.457 "r_mbytes_per_sec": 0, 00:40:18.457 "w_mbytes_per_sec": 0 00:40:18.457 }, 00:40:18.457 "claimed": true, 00:40:18.457 "claim_type": "exclusive_write", 00:40:18.457 "zoned": false, 00:40:18.457 "supported_io_types": { 00:40:18.457 "read": true, 00:40:18.457 "write": true, 00:40:18.457 "unmap": true, 00:40:18.457 "flush": true, 00:40:18.457 "reset": true, 00:40:18.457 "nvme_admin": false, 00:40:18.457 "nvme_io": false, 00:40:18.457 "nvme_io_md": false, 00:40:18.457 "write_zeroes": true, 00:40:18.457 "zcopy": true, 00:40:18.457 "get_zone_info": false, 00:40:18.457 "zone_management": false, 00:40:18.457 "zone_append": false, 00:40:18.457 "compare": false, 00:40:18.457 "compare_and_write": false, 00:40:18.457 "abort": true, 00:40:18.457 "seek_hole": false, 00:40:18.457 "seek_data": false, 00:40:18.457 "copy": true, 00:40:18.457 "nvme_iov_md": false 00:40:18.457 }, 00:40:18.457 "memory_domains": [ 00:40:18.457 { 00:40:18.457 "dma_device_id": "system", 00:40:18.457 "dma_device_type": 1 00:40:18.457 }, 00:40:18.457 { 00:40:18.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:18.457 "dma_device_type": 2 00:40:18.457 } 00:40:18.457 ], 00:40:18.457 "driver_specific": { 00:40:18.457 "passthru": { 00:40:18.457 "name": "pt1", 00:40:18.457 "base_bdev_name": "malloc1" 00:40:18.457 } 00:40:18.457 } 00:40:18.457 }' 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:18.457 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:18.726 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:18.983 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:18.983 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:40:18.983 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:40:18.983 09:06:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:40:19.241 "name": "pt2", 00:40:19.241 "aliases": [ 00:40:19.241 "00000000-0000-0000-0000-000000000002" 00:40:19.241 ], 00:40:19.241 "product_name": "passthru", 00:40:19.241 "block_size": 4096, 00:40:19.241 "num_blocks": 8192, 00:40:19.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:19.241 "md_size": 32, 00:40:19.241 "md_interleave": false, 00:40:19.241 "dif_type": 0, 00:40:19.241 "assigned_rate_limits": { 00:40:19.241 "rw_ios_per_sec": 0, 00:40:19.241 "rw_mbytes_per_sec": 0, 00:40:19.241 "r_mbytes_per_sec": 0, 00:40:19.241 "w_mbytes_per_sec": 0 00:40:19.241 }, 00:40:19.241 "claimed": true, 00:40:19.241 "claim_type": "exclusive_write", 00:40:19.241 "zoned": false, 00:40:19.241 "supported_io_types": { 00:40:19.241 "read": true, 00:40:19.241 "write": true, 00:40:19.241 "unmap": true, 00:40:19.241 "flush": true, 00:40:19.241 "reset": true, 00:40:19.241 "nvme_admin": false, 00:40:19.241 "nvme_io": false, 00:40:19.241 "nvme_io_md": false, 00:40:19.241 "write_zeroes": true, 00:40:19.241 "zcopy": true, 00:40:19.241 "get_zone_info": false, 00:40:19.241 "zone_management": false, 00:40:19.241 "zone_append": false, 00:40:19.241 "compare": false, 00:40:19.241 "compare_and_write": false, 00:40:19.241 "abort": true, 00:40:19.241 "seek_hole": false, 00:40:19.241 "seek_data": false, 00:40:19.241 "copy": true, 00:40:19.241 "nvme_iov_md": false 00:40:19.241 }, 00:40:19.241 "memory_domains": [ 00:40:19.241 { 00:40:19.241 "dma_device_id": "system", 00:40:19.241 "dma_device_type": 1 00:40:19.241 }, 00:40:19.241 { 00:40:19.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:19.241 "dma_device_type": 2 00:40:19.241 } 00:40:19.241 ], 00:40:19.241 "driver_specific": { 00:40:19.241 "passthru": { 00:40:19.241 "name": "pt2", 00:40:19.241 "base_bdev_name": "malloc2" 00:40:19.241 } 00:40:19.241 } 00:40:19.241 }' 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:19.241 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:19.500 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:40:19.759 09:06:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:40:19.759 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:19.759 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:40:19.759 [2024-07-12 09:06:54.942310] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:20.017 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 70959594-97b7-4742-bf00-3fb012dc4203 '!=' 70959594-97b7-4742-bf00-3fb012dc4203 ']' 00:40:20.017 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:40:20.017 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:40:20.017 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:40:20.017 09:06:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:40:20.017 [2024-07-12 09:06:55.198171] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:20.275 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:20.533 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:20.533 "name": "raid_bdev1", 00:40:20.533 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:20.533 "strip_size_kb": 0, 00:40:20.533 "state": "online", 00:40:20.533 "raid_level": "raid1", 00:40:20.533 "superblock": true, 00:40:20.533 "num_base_bdevs": 2, 00:40:20.533 "num_base_bdevs_discovered": 1, 00:40:20.533 "num_base_bdevs_operational": 1, 00:40:20.533 "base_bdevs_list": [ 00:40:20.533 { 00:40:20.533 "name": null, 00:40:20.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:20.533 "is_configured": false, 00:40:20.533 "data_offset": 256, 00:40:20.533 "data_size": 7936 00:40:20.533 }, 
00:40:20.533 { 00:40:20.533 "name": "pt2", 00:40:20.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:20.533 "is_configured": true, 00:40:20.533 "data_offset": 256, 00:40:20.533 "data_size": 7936 00:40:20.533 } 00:40:20.533 ] 00:40:20.533 }' 00:40:20.533 09:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:20.533 09:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:21.099 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:21.357 [2024-07-12 09:06:56.526493] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:21.357 [2024-07-12 09:06:56.526712] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:21.357 [2024-07-12 09:06:56.526885] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:21.357 [2024-07-12 09:06:56.527032] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:21.357 [2024-07-12 09:06:56.527131] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:40:21.357 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:21.357 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:40:21.922 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:40:21.922 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:40:21.922 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:40:21.922 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:40:21.922 09:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:40:21.922 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:22.180 [2024-07-12 09:06:57.338776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:22.180 [2024-07-12 09:06:57.339150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:22.180 [2024-07-12 09:06:57.339217] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:40:22.180 [2024-07-12 09:06:57.339475] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:40:22.180 [2024-07-12 09:06:57.342036] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:22.180 [2024-07-12 09:06:57.342261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:22.180 [2024-07-12 09:06:57.342487] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:22.180 [2024-07-12 09:06:57.342658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:22.180 [2024-07-12 09:06:57.342844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:40:22.180 [2024-07-12 09:06:57.342968] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:22.180 [2024-07-12 09:06:57.343110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:40:22.180 [2024-07-12 09:06:57.343349] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:40:22.180 [2024-07-12 09:06:57.343448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:40:22.180 [2024-07-12 09:06:57.343669] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:22.180 pt2 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:22.180 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:22.746 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:22.746 "name": "raid_bdev1", 00:40:22.746 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:22.746 "strip_size_kb": 0, 00:40:22.746 "state": "online", 00:40:22.746 "raid_level": "raid1", 00:40:22.746 "superblock": true, 00:40:22.746 "num_base_bdevs": 2, 00:40:22.746 "num_base_bdevs_discovered": 1, 00:40:22.746 "num_base_bdevs_operational": 1, 00:40:22.746 "base_bdevs_list": [ 00:40:22.746 { 00:40:22.746 "name": null, 00:40:22.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.746 "is_configured": false, 00:40:22.746 "data_offset": 256, 00:40:22.746 "data_size": 7936 00:40:22.746 }, 
00:40:22.746 { 00:40:22.746 "name": "pt2", 00:40:22.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:22.746 "is_configured": true, 00:40:22.746 "data_offset": 256, 00:40:22.746 "data_size": 7936 00:40:22.746 } 00:40:22.746 ] 00:40:22.746 }' 00:40:22.746 09:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:22.746 09:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:23.312 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:23.569 [2024-07-12 09:06:58.651329] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:23.569 [2024-07-12 09:06:58.651586] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:23.569 [2024-07-12 09:06:58.651763] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:23.569 [2024-07-12 09:06:58.651956] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:23.569 [2024-07-12 09:06:58.652069] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:40:23.569 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:40:23.569 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:23.826 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:40:23.826 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:40:23.826 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:40:23.826 09:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:24.083 [2024-07-12 09:06:59.223479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:24.083 [2024-07-12 09:06:59.223826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:24.083 [2024-07-12 09:06:59.223920] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:24.083 [2024-07-12 09:06:59.224155] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:24.083 [2024-07-12 09:06:59.226624] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:24.084 [2024-07-12 09:06:59.226801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:24.084 [2024-07-12 09:06:59.227021] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:24.084 [2024-07-12 09:06:59.227177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:24.084 [2024-07-12 09:06:59.227392] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:40:24.084 [2024-07-12 09:06:59.227505] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:24.084 [2024-07-12 09:06:59.227587] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:40:24.084 [2024-07-12 09:06:59.227841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:24.084 [2024-07-12 09:06:59.228130] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:40:24.084 [2024-07-12 09:06:59.228235] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:24.084 pt1 00:40:24.084 [2024-07-12 09:06:59.228407] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:24.084 [2024-07-12 09:06:59.228531] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:40:24.084 [2024-07-12 09:06:59.228545] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:40:24.084 [2024-07-12 09:06:59.228673] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:24.084 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:24.371 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:24.371 "name": "raid_bdev1", 00:40:24.371 "uuid": "70959594-97b7-4742-bf00-3fb012dc4203", 00:40:24.371 "strip_size_kb": 0, 00:40:24.371 "state": "online", 00:40:24.371 "raid_level": "raid1", 00:40:24.371 "superblock": true, 00:40:24.371 "num_base_bdevs": 2, 00:40:24.371 "num_base_bdevs_discovered": 1, 00:40:24.371 "num_base_bdevs_operational": 1, 00:40:24.371 "base_bdevs_list": [ 00:40:24.371 { 00:40:24.371 "name": null, 00:40:24.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:24.371 "is_configured": false, 00:40:24.371 "data_offset": 256, 00:40:24.371 "data_size": 7936 00:40:24.371 }, 00:40:24.371 { 00:40:24.371 "name": "pt2", 00:40:24.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:24.371 "is_configured": true, 00:40:24.371 "data_offset": 256, 
00:40:24.371 "data_size": 7936 00:40:24.371 } 00:40:24.371 ] 00:40:24.371 }' 00:40:24.371 09:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:24.371 09:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:25.303 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:40:25.303 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:40:25.561 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:40:25.561 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:25.561 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:40:25.818 [2024-07-12 09:07:00.780225] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 70959594-97b7-4742-bf00-3fb012dc4203 '!=' 70959594-97b7-4742-bf00-3fb012dc4203 ']' 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 164234 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 164234 ']' 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 164234 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164234 00:40:25.818 killing process with pid 164234 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164234' 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 164234 00:40:25.818 09:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 164234 00:40:25.818 [2024-07-12 09:07:00.822886] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:25.818 [2024-07-12 09:07:00.822984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:25.818 [2024-07-12 09:07:00.823042] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:25.818 [2024-07-12 09:07:00.823088] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:40:25.818 [2024-07-12 09:07:01.004539] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:27.187 ************************************ 00:40:27.187 END TEST raid_superblock_test_md_separate 00:40:27.187 ************************************ 00:40:27.187 
09:07:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:40:27.187 00:40:27.187 real 0m18.925s 00:40:27.187 user 0m35.117s 00:40:27.187 sys 0m2.014s 00:40:27.187 09:07:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:27.187 09:07:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:27.187 09:07:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:40:27.187 09:07:02 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:40:27.187 09:07:02 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:40:27.187 09:07:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:40:27.187 09:07:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:27.187 09:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:27.187 ************************************ 00:40:27.187 START TEST raid_rebuild_test_sb_md_separate 00:40:27.187 ************************************ 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:40:27.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=164803 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 164803 /var/tmp/spdk-raid.sock 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 164803 ']' 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:27.187 09:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:27.187 [2024-07-12 09:07:02.239277] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:40:27.187 [2024-07-12 09:07:02.240130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164803 ] 00:40:27.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:27.187 Zero copy mechanism will not be used. 
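For reference, everything from this point on is driven over the UNIX-domain RPC socket that bdevperf was just started with (-r /var/tmp/spdk-raid.sock). A condensed, hand-written sketch of the setup sequence the next log lines perform — command names and arguments are taken verbatim from the log; only the $RPC shorthand is introduced here for brevity:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two malloc base bdevs with 4096-byte blocks and 32 bytes of separate per-block
    # metadata (md_size 32, md_interleave false in the bdev dumps above)
    $RPC bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc
    $RPC bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc
    # wrap each malloc bdev in a passthru bdev so the test can remove and re-register it independently
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    # assemble a RAID1 bdev with an on-disk superblock (-s)
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # inspect the result the same way the verify_raid_bdev_state helper does
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'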
00:40:27.444 [2024-07-12 09:07:02.411726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.701 [2024-07-12 09:07:02.661876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.701 [2024-07-12 09:07:02.861145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:28.264 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:28.264 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:40:28.264 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:40:28.264 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:40:28.520 BaseBdev1_malloc 00:40:28.520 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:28.776 [2024-07-12 09:07:03.755735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:28.776 [2024-07-12 09:07:03.755896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:28.776 [2024-07-12 09:07:03.755953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:40:28.776 [2024-07-12 09:07:03.755981] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:28.776 [2024-07-12 09:07:03.758355] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:28.776 [2024-07-12 09:07:03.758414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:28.776 BaseBdev1 00:40:28.776 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:40:28.776 09:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:40:29.032 BaseBdev2_malloc 00:40:29.032 09:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:29.288 [2024-07-12 09:07:04.328869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:29.288 [2024-07-12 09:07:04.329029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:29.288 [2024-07-12 09:07:04.329085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:40:29.288 [2024-07-12 09:07:04.329112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:29.288 [2024-07-12 09:07:04.331485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:29.288 [2024-07-12 09:07:04.331546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:29.288 BaseBdev2 00:40:29.288 09:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:40:29.544 spare_malloc 00:40:29.544 09:07:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:29.898 spare_delay 00:40:29.898 09:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:30.155 [2024-07-12 09:07:05.225112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:30.155 [2024-07-12 09:07:05.225255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:30.155 [2024-07-12 09:07:05.225307] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:40:30.155 [2024-07-12 09:07:05.225343] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:30.155 [2024-07-12 09:07:05.227706] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:30.155 [2024-07-12 09:07:05.227772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:30.155 spare 00:40:30.155 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:40:30.412 [2024-07-12 09:07:05.465239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:30.412 [2024-07-12 09:07:05.467410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:30.412 [2024-07-12 09:07:05.467684] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:40:30.412 [2024-07-12 09:07:05.467709] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:30.412 [2024-07-12 09:07:05.467895] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:40:30.412 [2024-07-12 09:07:05.468041] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:40:30.412 [2024-07-12 09:07:05.468057] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:40:30.412 [2024-07-12 09:07:05.468185] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:30.412 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:30.670 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:30.670 "name": "raid_bdev1", 00:40:30.670 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:30.670 "strip_size_kb": 0, 00:40:30.670 "state": "online", 00:40:30.670 "raid_level": "raid1", 00:40:30.670 "superblock": true, 00:40:30.670 "num_base_bdevs": 2, 00:40:30.670 "num_base_bdevs_discovered": 2, 00:40:30.670 "num_base_bdevs_operational": 2, 00:40:30.670 "base_bdevs_list": [ 00:40:30.670 { 00:40:30.670 "name": "BaseBdev1", 00:40:30.670 "uuid": "e3d39211-665e-52b7-8705-1fff0288cea3", 00:40:30.670 "is_configured": true, 00:40:30.670 "data_offset": 256, 00:40:30.670 "data_size": 7936 00:40:30.670 }, 00:40:30.670 { 00:40:30.670 "name": "BaseBdev2", 00:40:30.670 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:30.670 "is_configured": true, 00:40:30.670 "data_offset": 256, 00:40:30.670 "data_size": 7936 00:40:30.670 } 00:40:30.670 ] 00:40:30.670 }' 00:40:30.670 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:30.670 09:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:31.600 09:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:31.600 09:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:40:31.600 [2024-07-12 09:07:06.761742] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:31.600 09:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:40:31.600 09:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:31.600 09:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:31.858 09:07:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:31.858 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:32.115 [2024-07-12 09:07:07.281589] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:40:32.115 /dev/nbd0 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:32.383 1+0 records in 00:40:32.383 1+0 records out 00:40:32.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369252 s, 11.1 MB/s 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:40:32.383 09:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:40:33.318 7936+0 records in 00:40:33.318 7936+0 records out 00:40:33.318 32505856 bytes (33 MB, 31 MiB) copied, 0.948455 s, 34.3 MB/s 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:33.318 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:33.577 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:33.577 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:33.578 [2024-07-12 09:07:08.594371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:40:33.578 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:33.863 [2024-07-12 09:07:08.854030] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:33.863 09:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:34.121 09:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:34.121 "name": "raid_bdev1", 00:40:34.121 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:34.121 "strip_size_kb": 0, 00:40:34.121 "state": "online", 00:40:34.121 "raid_level": "raid1", 00:40:34.121 "superblock": true, 00:40:34.121 "num_base_bdevs": 2, 00:40:34.121 "num_base_bdevs_discovered": 1, 00:40:34.121 "num_base_bdevs_operational": 1, 00:40:34.121 "base_bdevs_list": [ 00:40:34.121 { 00:40:34.121 "name": null, 00:40:34.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:34.121 "is_configured": false, 00:40:34.121 "data_offset": 256, 00:40:34.121 "data_size": 7936 00:40:34.121 }, 00:40:34.121 { 00:40:34.121 "name": "BaseBdev2", 00:40:34.121 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:34.121 "is_configured": true, 00:40:34.121 "data_offset": 256, 00:40:34.121 "data_size": 7936 00:40:34.121 } 00:40:34.121 ] 00:40:34.121 }' 00:40:34.121 09:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:34.121 09:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:34.688 09:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:34.947 [2024-07-12 09:07:10.118292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:34.947 [2024-07-12 09:07:10.131726] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:40:34.947 [2024-07-12 09:07:10.133983] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:35.204 09:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:36.137 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:36.395 "name": "raid_bdev1", 00:40:36.395 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:36.395 "strip_size_kb": 0, 
00:40:36.395 "state": "online", 00:40:36.395 "raid_level": "raid1", 00:40:36.395 "superblock": true, 00:40:36.395 "num_base_bdevs": 2, 00:40:36.395 "num_base_bdevs_discovered": 2, 00:40:36.395 "num_base_bdevs_operational": 2, 00:40:36.395 "process": { 00:40:36.395 "type": "rebuild", 00:40:36.395 "target": "spare", 00:40:36.395 "progress": { 00:40:36.395 "blocks": 3072, 00:40:36.395 "percent": 38 00:40:36.395 } 00:40:36.395 }, 00:40:36.395 "base_bdevs_list": [ 00:40:36.395 { 00:40:36.395 "name": "spare", 00:40:36.395 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:36.395 "is_configured": true, 00:40:36.395 "data_offset": 256, 00:40:36.395 "data_size": 7936 00:40:36.395 }, 00:40:36.395 { 00:40:36.395 "name": "BaseBdev2", 00:40:36.395 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:36.395 "is_configured": true, 00:40:36.395 "data_offset": 256, 00:40:36.395 "data_size": 7936 00:40:36.395 } 00:40:36.395 ] 00:40:36.395 }' 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:36.395 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:36.653 [2024-07-12 09:07:11.776681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:36.653 [2024-07-12 09:07:11.845926] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:36.653 [2024-07-12 09:07:11.846054] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:36.653 [2024-07-12 09:07:11.846080] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:36.653 [2024-07-12 09:07:11.846091] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:36.911 09:07:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:36.911 09:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:37.169 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:37.169 "name": "raid_bdev1", 00:40:37.169 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:37.169 "strip_size_kb": 0, 00:40:37.169 "state": "online", 00:40:37.169 "raid_level": "raid1", 00:40:37.169 "superblock": true, 00:40:37.169 "num_base_bdevs": 2, 00:40:37.169 "num_base_bdevs_discovered": 1, 00:40:37.169 "num_base_bdevs_operational": 1, 00:40:37.169 "base_bdevs_list": [ 00:40:37.169 { 00:40:37.169 "name": null, 00:40:37.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:37.169 "is_configured": false, 00:40:37.169 "data_offset": 256, 00:40:37.169 "data_size": 7936 00:40:37.169 }, 00:40:37.169 { 00:40:37.169 "name": "BaseBdev2", 00:40:37.169 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:37.169 "is_configured": true, 00:40:37.169 "data_offset": 256, 00:40:37.169 "data_size": 7936 00:40:37.169 } 00:40:37.169 ] 00:40:37.169 }' 00:40:37.169 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:37.169 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:37.734 09:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:38.016 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:38.016 "name": "raid_bdev1", 00:40:38.016 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:38.016 "strip_size_kb": 0, 00:40:38.016 "state": "online", 00:40:38.016 "raid_level": "raid1", 00:40:38.016 "superblock": true, 00:40:38.016 "num_base_bdevs": 2, 00:40:38.016 "num_base_bdevs_discovered": 1, 00:40:38.016 "num_base_bdevs_operational": 1, 00:40:38.016 "base_bdevs_list": [ 00:40:38.016 { 00:40:38.016 "name": null, 00:40:38.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:38.016 "is_configured": false, 00:40:38.016 "data_offset": 256, 00:40:38.016 "data_size": 7936 00:40:38.016 }, 00:40:38.016 { 00:40:38.016 "name": "BaseBdev2", 00:40:38.016 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:38.016 "is_configured": true, 00:40:38.016 "data_offset": 256, 00:40:38.016 "data_size": 7936 00:40:38.016 } 00:40:38.016 ] 00:40:38.016 }' 00:40:38.016 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
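The preceding entries remove the rebuild target (spare) mid-process and then confirm the array stays online and degraded with no rebuild running. A rough sketch of the kind of check the verify_raid_bdev_state/verify_raid_bdev_process helpers perform, assuming only the RPC socket and jq filters already shown in this trace, might be:

    # Sketch only: query raid_bdev1 once and assert state, member count and
    # process type, mirroring the jq filters used by the verify_* helpers above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == "online" ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]        # degraded: one member left
    [[ $(jq -r '.process.type // "none"' <<< "$info") == "none" ]]       # no rebuild in progress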
00:40:38.016 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:38.016 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:38.307 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:38.307 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:38.307 [2024-07-12 09:07:13.416445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:38.307 [2024-07-12 09:07:13.429021] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:40:38.307 [2024-07-12 09:07:13.431170] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:38.307 09:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:39.681 "name": "raid_bdev1", 00:40:39.681 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:39.681 "strip_size_kb": 0, 00:40:39.681 "state": "online", 00:40:39.681 "raid_level": "raid1", 00:40:39.681 "superblock": true, 00:40:39.681 "num_base_bdevs": 2, 00:40:39.681 "num_base_bdevs_discovered": 2, 00:40:39.681 "num_base_bdevs_operational": 2, 00:40:39.681 "process": { 00:40:39.681 "type": "rebuild", 00:40:39.681 "target": "spare", 00:40:39.681 "progress": { 00:40:39.681 "blocks": 3072, 00:40:39.681 "percent": 38 00:40:39.681 } 00:40:39.681 }, 00:40:39.681 "base_bdevs_list": [ 00:40:39.681 { 00:40:39.681 "name": "spare", 00:40:39.681 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:39.681 "is_configured": true, 00:40:39.681 "data_offset": 256, 00:40:39.681 "data_size": 7936 00:40:39.681 }, 00:40:39.681 { 00:40:39.681 "name": "BaseBdev2", 00:40:39.681 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:39.681 "is_configured": true, 00:40:39.681 "data_offset": 256, 00:40:39.681 "data_size": 7936 00:40:39.681 } 00:40:39.681 ] 00:40:39.681 }' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:40:39.681 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1533 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.681 09:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:39.939 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:39.939 "name": "raid_bdev1", 00:40:39.939 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:39.939 "strip_size_kb": 0, 00:40:39.939 "state": "online", 00:40:39.939 "raid_level": "raid1", 00:40:39.939 "superblock": true, 00:40:39.939 "num_base_bdevs": 2, 00:40:39.939 "num_base_bdevs_discovered": 2, 00:40:39.939 "num_base_bdevs_operational": 2, 00:40:39.939 "process": { 00:40:39.939 "type": "rebuild", 00:40:39.939 "target": "spare", 00:40:39.939 "progress": { 00:40:39.939 "blocks": 4096, 00:40:39.939 "percent": 51 00:40:39.939 } 00:40:39.939 }, 00:40:39.939 "base_bdevs_list": [ 00:40:39.939 { 00:40:39.939 "name": "spare", 00:40:39.939 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:39.939 "is_configured": true, 00:40:39.939 "data_offset": 256, 00:40:39.939 "data_size": 7936 00:40:39.939 }, 00:40:39.939 { 00:40:39.939 "name": "BaseBdev2", 00:40:39.939 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:39.939 "is_configured": true, 00:40:39.939 "data_offset": 256, 00:40:39.939 "data_size": 7936 00:40:39.939 } 00:40:39.939 ] 00:40:39.939 }' 00:40:39.939 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:40.197 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:40.197 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:40:40.197 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:40.197 09:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:41.131 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:41.389 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:41.389 "name": "raid_bdev1", 00:40:41.389 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:41.389 "strip_size_kb": 0, 00:40:41.389 "state": "online", 00:40:41.389 "raid_level": "raid1", 00:40:41.389 "superblock": true, 00:40:41.389 "num_base_bdevs": 2, 00:40:41.389 "num_base_bdevs_discovered": 2, 00:40:41.389 "num_base_bdevs_operational": 2, 00:40:41.389 "process": { 00:40:41.389 "type": "rebuild", 00:40:41.389 "target": "spare", 00:40:41.389 "progress": { 00:40:41.389 "blocks": 7680, 00:40:41.389 "percent": 96 00:40:41.389 } 00:40:41.389 }, 00:40:41.389 "base_bdevs_list": [ 00:40:41.389 { 00:40:41.389 "name": "spare", 00:40:41.389 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:41.389 "is_configured": true, 00:40:41.389 "data_offset": 256, 00:40:41.389 "data_size": 7936 00:40:41.389 }, 00:40:41.389 { 00:40:41.389 "name": "BaseBdev2", 00:40:41.389 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:41.389 "is_configured": true, 00:40:41.389 "data_offset": 256, 00:40:41.389 "data_size": 7936 00:40:41.389 } 00:40:41.389 ] 00:40:41.389 }' 00:40:41.389 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:41.389 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:41.389 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:41.389 [2024-07-12 09:07:16.553248] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:41.389 [2024-07-12 09:07:16.553331] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:41.389 [2024-07-12 09:07:16.553522] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:41.647 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:41.647 09:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:40:42.581 09:07:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:42.581 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:42.840 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:42.840 "name": "raid_bdev1", 00:40:42.840 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:42.840 "strip_size_kb": 0, 00:40:42.840 "state": "online", 00:40:42.840 "raid_level": "raid1", 00:40:42.840 "superblock": true, 00:40:42.840 "num_base_bdevs": 2, 00:40:42.840 "num_base_bdevs_discovered": 2, 00:40:42.840 "num_base_bdevs_operational": 2, 00:40:42.840 "base_bdevs_list": [ 00:40:42.840 { 00:40:42.840 "name": "spare", 00:40:42.840 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:42.840 "is_configured": true, 00:40:42.840 "data_offset": 256, 00:40:42.840 "data_size": 7936 00:40:42.840 }, 00:40:42.840 { 00:40:42.840 "name": "BaseBdev2", 00:40:42.840 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:42.840 "is_configured": true, 00:40:42.840 "data_offset": 256, 00:40:42.840 "data_size": 7936 00:40:42.840 } 00:40:42.840 ] 00:40:42.840 }' 00:40:42.840 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:42.840 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:42.840 09:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:42.840 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:43.407 "name": "raid_bdev1", 00:40:43.407 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:43.407 "strip_size_kb": 0, 00:40:43.407 "state": "online", 00:40:43.407 "raid_level": "raid1", 00:40:43.407 "superblock": true, 00:40:43.407 "num_base_bdevs": 2, 00:40:43.407 "num_base_bdevs_discovered": 2, 00:40:43.407 "num_base_bdevs_operational": 2, 00:40:43.407 "base_bdevs_list": [ 00:40:43.407 { 00:40:43.407 "name": "spare", 00:40:43.407 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:43.407 "is_configured": true, 00:40:43.407 "data_offset": 256, 00:40:43.407 "data_size": 7936 00:40:43.407 }, 00:40:43.407 { 00:40:43.407 "name": "BaseBdev2", 00:40:43.407 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:43.407 "is_configured": true, 00:40:43.407 "data_offset": 256, 00:40:43.407 "data_size": 7936 00:40:43.407 } 00:40:43.407 ] 00:40:43.407 }' 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:43.407 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:43.665 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:43.665 "name": "raid_bdev1", 00:40:43.665 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:43.665 "strip_size_kb": 0, 00:40:43.665 "state": "online", 00:40:43.665 "raid_level": "raid1", 00:40:43.665 "superblock": true, 00:40:43.665 "num_base_bdevs": 2, 00:40:43.665 "num_base_bdevs_discovered": 2, 00:40:43.665 "num_base_bdevs_operational": 2, 00:40:43.665 "base_bdevs_list": 
[ 00:40:43.665 { 00:40:43.665 "name": "spare", 00:40:43.665 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:43.665 "is_configured": true, 00:40:43.665 "data_offset": 256, 00:40:43.665 "data_size": 7936 00:40:43.665 }, 00:40:43.665 { 00:40:43.665 "name": "BaseBdev2", 00:40:43.665 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:43.665 "is_configured": true, 00:40:43.665 "data_offset": 256, 00:40:43.665 "data_size": 7936 00:40:43.665 } 00:40:43.665 ] 00:40:43.665 }' 00:40:43.665 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:43.665 09:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:44.231 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:44.489 [2024-07-12 09:07:19.648243] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:44.489 [2024-07-12 09:07:19.648307] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:44.489 [2024-07-12 09:07:19.648412] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:44.489 [2024-07-12 09:07:19.648500] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:44.489 [2024-07-12 09:07:19.648516] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:40:44.489 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:44.489 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:44.747 09:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:45.005 /dev/nbd0 00:40:45.264 09:07:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:45.264 1+0 records in 00:40:45.264 1+0 records out 00:40:45.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512548 s, 8.0 MB/s 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:45.264 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:40:45.523 /dev/nbd1 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:40:45.523 09:07:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:45.523 1+0 records in 00:40:45.523 1+0 records out 00:40:45.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319768 s, 12.8 MB/s 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:45.523 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:45.782 09:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:46.040 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:40:46.299 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:46.564 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:46.833 [2024-07-12 09:07:21.979461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:46.833 [2024-07-12 09:07:21.979584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:46.833 [2024-07-12 09:07:21.979660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:46.833 [2024-07-12 09:07:21.979686] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:46.833 [2024-07-12 09:07:21.982076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:46.833 [2024-07-12 09:07:21.982133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:46.833 [2024-07-12 09:07:21.982264] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:46.833 [2024-07-12 09:07:21.982329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:46.833 [2024-07-12 09:07:21.982479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:46.833 
spare 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:46.833 09:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:47.091 [2024-07-12 09:07:22.082604] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:40:47.091 [2024-07-12 09:07:22.082684] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:40:47.091 [2024-07-12 09:07:22.083013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:40:47.091 [2024-07-12 09:07:22.083312] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:40:47.091 [2024-07-12 09:07:22.083348] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:40:47.091 [2024-07-12 09:07:22.083601] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:47.349 09:07:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:47.349 "name": "raid_bdev1", 00:40:47.349 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:47.349 "strip_size_kb": 0, 00:40:47.349 "state": "online", 00:40:47.349 "raid_level": "raid1", 00:40:47.349 "superblock": true, 00:40:47.349 "num_base_bdevs": 2, 00:40:47.349 "num_base_bdevs_discovered": 2, 00:40:47.349 "num_base_bdevs_operational": 2, 00:40:47.349 "base_bdevs_list": [ 00:40:47.349 { 00:40:47.349 "name": "spare", 00:40:47.349 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:47.349 "is_configured": true, 00:40:47.349 "data_offset": 256, 00:40:47.349 "data_size": 7936 00:40:47.349 }, 00:40:47.349 { 00:40:47.349 "name": "BaseBdev2", 00:40:47.349 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:47.349 "is_configured": true, 00:40:47.349 "data_offset": 256, 00:40:47.349 "data_size": 7936 00:40:47.349 } 00:40:47.349 ] 00:40:47.349 }' 00:40:47.349 09:07:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:47.349 09:07:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:47.916 09:07:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:47.916 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.190 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:48.190 "name": "raid_bdev1", 00:40:48.190 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:48.190 "strip_size_kb": 0, 00:40:48.190 "state": "online", 00:40:48.190 "raid_level": "raid1", 00:40:48.190 "superblock": true, 00:40:48.190 "num_base_bdevs": 2, 00:40:48.190 "num_base_bdevs_discovered": 2, 00:40:48.190 "num_base_bdevs_operational": 2, 00:40:48.190 "base_bdevs_list": [ 00:40:48.190 { 00:40:48.190 "name": "spare", 00:40:48.190 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:48.190 "is_configured": true, 00:40:48.190 "data_offset": 256, 00:40:48.190 "data_size": 7936 00:40:48.190 }, 00:40:48.190 { 00:40:48.190 "name": "BaseBdev2", 00:40:48.190 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:48.190 "is_configured": true, 00:40:48.190 "data_offset": 256, 00:40:48.190 "data_size": 7936 00:40:48.190 } 00:40:48.190 ] 00:40:48.190 }' 00:40:48.190 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:48.468 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:48.468 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:48.468 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:48.468 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:48.468 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:48.727 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:40:48.727 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:48.987 [2024-07-12 09:07:23.984021] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:48.987 
09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:48.987 09:07:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:49.246 09:07:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:49.246 "name": "raid_bdev1", 00:40:49.246 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:49.246 "strip_size_kb": 0, 00:40:49.246 "state": "online", 00:40:49.246 "raid_level": "raid1", 00:40:49.246 "superblock": true, 00:40:49.246 "num_base_bdevs": 2, 00:40:49.246 "num_base_bdevs_discovered": 1, 00:40:49.246 "num_base_bdevs_operational": 1, 00:40:49.246 "base_bdevs_list": [ 00:40:49.246 { 00:40:49.246 "name": null, 00:40:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.246 "is_configured": false, 00:40:49.246 "data_offset": 256, 00:40:49.246 "data_size": 7936 00:40:49.246 }, 00:40:49.246 { 00:40:49.246 "name": "BaseBdev2", 00:40:49.246 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:49.246 "is_configured": true, 00:40:49.246 "data_offset": 256, 00:40:49.246 "data_size": 7936 00:40:49.246 } 00:40:49.246 ] 00:40:49.246 }' 00:40:49.246 09:07:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:49.246 09:07:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:49.814 09:07:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:50.071 [2024-07-12 09:07:25.168319] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:50.071 [2024-07-12 09:07:25.168678] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:50.071 [2024-07-12 09:07:25.168721] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:50.071 [2024-07-12 09:07:25.168813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:50.071 [2024-07-12 09:07:25.182261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:40:50.071 [2024-07-12 09:07:25.184409] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:50.072 09:07:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:40:51.007 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:51.007 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:51.007 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:51.007 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:51.007 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:51.265 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:51.265 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:51.524 "name": "raid_bdev1", 00:40:51.524 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:51.524 "strip_size_kb": 0, 00:40:51.524 "state": "online", 00:40:51.524 "raid_level": "raid1", 00:40:51.524 "superblock": true, 00:40:51.524 "num_base_bdevs": 2, 00:40:51.524 "num_base_bdevs_discovered": 2, 00:40:51.524 "num_base_bdevs_operational": 2, 00:40:51.524 "process": { 00:40:51.524 "type": "rebuild", 00:40:51.524 "target": "spare", 00:40:51.524 "progress": { 00:40:51.524 "blocks": 3328, 00:40:51.524 "percent": 41 00:40:51.524 } 00:40:51.524 }, 00:40:51.524 "base_bdevs_list": [ 00:40:51.524 { 00:40:51.524 "name": "spare", 00:40:51.524 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:51.524 "is_configured": true, 00:40:51.524 "data_offset": 256, 00:40:51.524 "data_size": 7936 00:40:51.524 }, 00:40:51.524 { 00:40:51.524 "name": "BaseBdev2", 00:40:51.524 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:51.524 "is_configured": true, 00:40:51.524 "data_offset": 256, 00:40:51.524 "data_size": 7936 00:40:51.524 } 00:40:51.524 ] 00:40:51.524 }' 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:51.524 09:07:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:51.782 [2024-07-12 09:07:26.903036] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:52.040 [2024-07-12 09:07:26.996884] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:40:52.040 [2024-07-12 09:07:26.997016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:52.040 [2024-07-12 09:07:26.997041] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:52.040 [2024-07-12 09:07:26.997052] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:52.040 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:52.041 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:52.041 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:52.299 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:52.299 "name": "raid_bdev1", 00:40:52.299 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:52.299 "strip_size_kb": 0, 00:40:52.299 "state": "online", 00:40:52.299 "raid_level": "raid1", 00:40:52.299 "superblock": true, 00:40:52.299 "num_base_bdevs": 2, 00:40:52.299 "num_base_bdevs_discovered": 1, 00:40:52.299 "num_base_bdevs_operational": 1, 00:40:52.299 "base_bdevs_list": [ 00:40:52.299 { 00:40:52.299 "name": null, 00:40:52.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:52.299 "is_configured": false, 00:40:52.299 "data_offset": 256, 00:40:52.299 "data_size": 7936 00:40:52.299 }, 00:40:52.299 { 00:40:52.299 "name": "BaseBdev2", 00:40:52.299 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:52.299 "is_configured": true, 00:40:52.299 "data_offset": 256, 00:40:52.299 "data_size": 7936 00:40:52.299 } 00:40:52.299 ] 00:40:52.299 }' 00:40:52.299 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:52.299 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:52.865 09:07:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:53.123 [2024-07-12 09:07:28.279277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:53.123 [2024-07-12 09:07:28.279395] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:53.123 [2024-07-12 09:07:28.279444] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:40:53.123 [2024-07-12 09:07:28.279476] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:53.123 [2024-07-12 09:07:28.279834] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:53.123 [2024-07-12 09:07:28.279885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:53.123 [2024-07-12 09:07:28.280012] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:53.123 [2024-07-12 09:07:28.280031] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:40:53.123 [2024-07-12 09:07:28.280041] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:40:53.123 [2024-07-12 09:07:28.280092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:53.123 [2024-07-12 09:07:28.292472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:40:53.123 spare 00:40:53.123 [2024-07-12 09:07:28.294571] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:53.123 09:07:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:54.540 "name": "raid_bdev1", 00:40:54.540 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:54.540 "strip_size_kb": 0, 00:40:54.540 "state": "online", 00:40:54.540 "raid_level": "raid1", 00:40:54.540 "superblock": true, 00:40:54.540 "num_base_bdevs": 2, 00:40:54.540 "num_base_bdevs_discovered": 2, 00:40:54.540 "num_base_bdevs_operational": 2, 00:40:54.540 "process": { 00:40:54.540 "type": "rebuild", 00:40:54.540 "target": "spare", 00:40:54.540 "progress": { 00:40:54.540 "blocks": 3072, 00:40:54.540 "percent": 38 00:40:54.540 } 00:40:54.540 }, 00:40:54.540 "base_bdevs_list": [ 00:40:54.540 { 00:40:54.540 "name": "spare", 00:40:54.540 "uuid": "d9bb18e8-d108-5576-8416-f5a547565e43", 00:40:54.540 "is_configured": true, 00:40:54.540 "data_offset": 256, 00:40:54.540 "data_size": 7936 00:40:54.540 }, 00:40:54.540 { 00:40:54.540 "name": "BaseBdev2", 00:40:54.540 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:54.540 "is_configured": true, 00:40:54.540 
"data_offset": 256, 00:40:54.540 "data_size": 7936 00:40:54.540 } 00:40:54.540 ] 00:40:54.540 }' 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:40:54.540 09:07:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:54.797 [2024-07-12 09:07:29.977632] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:55.055 [2024-07-12 09:07:30.006838] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:55.055 [2024-07-12 09:07:30.006977] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:55.055 [2024-07-12 09:07:30.007000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:55.055 [2024-07-12 09:07:30.007011] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.055 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.313 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:55.313 "name": "raid_bdev1", 00:40:55.313 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:55.313 "strip_size_kb": 0, 00:40:55.313 "state": "online", 00:40:55.313 "raid_level": "raid1", 00:40:55.313 "superblock": true, 00:40:55.313 "num_base_bdevs": 2, 00:40:55.313 "num_base_bdevs_discovered": 1, 00:40:55.313 "num_base_bdevs_operational": 1, 00:40:55.313 "base_bdevs_list": [ 00:40:55.313 { 00:40:55.313 "name": null, 00:40:55.313 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:40:55.313 "is_configured": false, 00:40:55.313 "data_offset": 256, 00:40:55.313 "data_size": 7936 00:40:55.313 }, 00:40:55.313 { 00:40:55.313 "name": "BaseBdev2", 00:40:55.313 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:55.313 "is_configured": true, 00:40:55.313 "data_offset": 256, 00:40:55.313 "data_size": 7936 00:40:55.313 } 00:40:55.313 ] 00:40:55.313 }' 00:40:55.313 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:55.313 09:07:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.878 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:56.135 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:56.135 "name": "raid_bdev1", 00:40:56.135 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:56.135 "strip_size_kb": 0, 00:40:56.135 "state": "online", 00:40:56.135 "raid_level": "raid1", 00:40:56.135 "superblock": true, 00:40:56.135 "num_base_bdevs": 2, 00:40:56.135 "num_base_bdevs_discovered": 1, 00:40:56.135 "num_base_bdevs_operational": 1, 00:40:56.135 "base_bdevs_list": [ 00:40:56.135 { 00:40:56.135 "name": null, 00:40:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:56.135 "is_configured": false, 00:40:56.135 "data_offset": 256, 00:40:56.135 "data_size": 7936 00:40:56.135 }, 00:40:56.135 { 00:40:56.135 "name": "BaseBdev2", 00:40:56.135 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:56.135 "is_configured": true, 00:40:56.135 "data_offset": 256, 00:40:56.135 "data_size": 7936 00:40:56.135 } 00:40:56.135 ] 00:40:56.135 }' 00:40:56.135 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:56.393 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:56.393 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:56.393 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:56.393 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:40:56.650 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:56.908 [2024-07-12 09:07:31.877934] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:40:56.908 [2024-07-12 09:07:31.878040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:56.908 [2024-07-12 09:07:31.878087] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:40:56.908 [2024-07-12 09:07:31.878117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:56.908 [2024-07-12 09:07:31.878392] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:56.908 [2024-07-12 09:07:31.878425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:56.908 [2024-07-12 09:07:31.878563] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:56.908 [2024-07-12 09:07:31.878583] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:56.908 [2024-07-12 09:07:31.878592] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:56.908 BaseBdev1 00:40:56.908 09:07:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:40:57.858 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:40:57.859 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:57.859 09:07:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:58.116 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:40:58.116 "name": "raid_bdev1", 00:40:58.116 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:58.116 "strip_size_kb": 0, 00:40:58.116 "state": "online", 00:40:58.116 "raid_level": "raid1", 00:40:58.116 "superblock": true, 00:40:58.116 "num_base_bdevs": 2, 00:40:58.116 "num_base_bdevs_discovered": 1, 00:40:58.116 "num_base_bdevs_operational": 1, 00:40:58.116 "base_bdevs_list": [ 00:40:58.116 { 00:40:58.116 "name": null, 00:40:58.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:58.116 "is_configured": false, 00:40:58.116 "data_offset": 256, 00:40:58.116 "data_size": 7936 00:40:58.116 }, 00:40:58.116 { 00:40:58.116 "name": 
"BaseBdev2", 00:40:58.116 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:58.116 "is_configured": true, 00:40:58.116 "data_offset": 256, 00:40:58.116 "data_size": 7936 00:40:58.116 } 00:40:58.116 ] 00:40:58.116 }' 00:40:58.116 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:40:58.116 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:59.046 09:07:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.046 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:59.046 "name": "raid_bdev1", 00:40:59.046 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:40:59.046 "strip_size_kb": 0, 00:40:59.046 "state": "online", 00:40:59.046 "raid_level": "raid1", 00:40:59.046 "superblock": true, 00:40:59.046 "num_base_bdevs": 2, 00:40:59.046 "num_base_bdevs_discovered": 1, 00:40:59.046 "num_base_bdevs_operational": 1, 00:40:59.046 "base_bdevs_list": [ 00:40:59.046 { 00:40:59.046 "name": null, 00:40:59.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:59.046 "is_configured": false, 00:40:59.046 "data_offset": 256, 00:40:59.046 "data_size": 7936 00:40:59.046 }, 00:40:59.046 { 00:40:59.046 "name": "BaseBdev2", 00:40:59.046 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:40:59.046 "is_configured": true, 00:40:59.046 "data_offset": 256, 00:40:59.046 "data_size": 7936 00:40:59.046 } 00:40:59.046 ] 00:40:59.046 }' 00:40:59.046 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:59.303 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:59.560 [2024-07-12 09:07:34.530551] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:59.560 [2024-07-12 09:07:34.530747] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:40:59.560 [2024-07-12 09:07:34.530764] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:59.560 request: 00:40:59.560 { 00:40:59.560 "base_bdev": "BaseBdev1", 00:40:59.560 "raid_bdev": "raid_bdev1", 00:40:59.560 "method": "bdev_raid_add_base_bdev", 00:40:59.560 "req_id": 1 00:40:59.560 } 00:40:59.560 Got JSON-RPC error response 00:40:59.560 response: 00:40:59.560 { 00:40:59.560 "code": -22, 00:40:59.560 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:59.560 } 00:40:59.560 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:40:59.560 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:59.560 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:59.560 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:59.560 09:07:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:00.493 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:00.751 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:00.751 "name": "raid_bdev1", 00:41:00.751 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:41:00.751 "strip_size_kb": 0, 00:41:00.751 "state": "online", 00:41:00.751 "raid_level": "raid1", 00:41:00.751 "superblock": true, 00:41:00.751 "num_base_bdevs": 2, 00:41:00.751 "num_base_bdevs_discovered": 1, 00:41:00.751 "num_base_bdevs_operational": 1, 00:41:00.751 "base_bdevs_list": [ 00:41:00.751 { 00:41:00.751 "name": null, 00:41:00.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:00.751 "is_configured": false, 00:41:00.751 "data_offset": 256, 00:41:00.751 "data_size": 7936 00:41:00.751 }, 00:41:00.751 { 00:41:00.751 "name": "BaseBdev2", 00:41:00.751 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:41:00.751 "is_configured": true, 00:41:00.751 "data_offset": 256, 00:41:00.751 "data_size": 7936 00:41:00.751 } 00:41:00.751 ] 00:41:00.751 }' 00:41:00.751 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:00.751 09:07:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:01.684 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:01.942 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:01.942 "name": "raid_bdev1", 00:41:01.942 "uuid": "c41154df-2d57-425b-98b2-a93fbb26d3e8", 00:41:01.942 "strip_size_kb": 0, 00:41:01.942 "state": "online", 00:41:01.942 "raid_level": "raid1", 00:41:01.942 "superblock": true, 00:41:01.942 "num_base_bdevs": 2, 00:41:01.942 "num_base_bdevs_discovered": 1, 00:41:01.942 "num_base_bdevs_operational": 1, 00:41:01.942 "base_bdevs_list": [ 00:41:01.942 { 00:41:01.942 "name": null, 00:41:01.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:01.942 "is_configured": false, 00:41:01.942 "data_offset": 256, 00:41:01.942 "data_size": 7936 
00:41:01.942 }, 00:41:01.942 { 00:41:01.942 "name": "BaseBdev2", 00:41:01.942 "uuid": "3c2080a5-cab4-57ec-8b1c-072a8d1b4cfe", 00:41:01.942 "is_configured": true, 00:41:01.942 "data_offset": 256, 00:41:01.942 "data_size": 7936 00:41:01.942 } 00:41:01.942 ] 00:41:01.942 }' 00:41:01.942 09:07:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 164803 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 164803 ']' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 164803 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164803 00:41:01.942 killing process with pid 164803 00:41:01.942 Received shutdown signal, test time was about 60.000000 seconds 00:41:01.942 00:41:01.942 Latency(us) 00:41:01.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:01.942 =================================================================================================================== 00:41:01.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164803' 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 164803 00:41:01.942 09:07:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 164803 00:41:01.942 [2024-07-12 09:07:37.077258] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:01.942 [2024-07-12 09:07:37.077390] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:01.942 [2024-07-12 09:07:37.077445] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:01.942 [2024-07-12 09:07:37.077457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:41:02.201 [2024-07-12 09:07:37.352261] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:03.585 ************************************ 00:41:03.585 END TEST raid_rebuild_test_sb_md_separate 00:41:03.585 ************************************ 00:41:03.585 09:07:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:41:03.585 00:41:03.585 real 0m36.339s 00:41:03.585 user 0m59.158s 00:41:03.585 sys 0m3.738s 
00:41:03.585 09:07:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:03.585 09:07:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:41:03.585 09:07:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:41:03.585 09:07:38 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:41:03.585 09:07:38 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:41:03.585 09:07:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:41:03.585 09:07:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:03.585 09:07:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:03.585 ************************************ 00:41:03.585 START TEST raid_state_function_test_sb_md_interleaved 00:41:03.585 ************************************ 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 
'!=' raid1 ']' 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=165792 00:41:03.585 Process raid pid: 165792 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 165792' 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 165792 /var/tmp/spdk-raid.sock 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 165792 ']' 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:03.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:03.585 09:07:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:03.585 [2024-07-12 09:07:38.627154] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:41:03.585 [2024-07-12 09:07:38.627335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:03.844 [2024-07-12 09:07:38.784540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.844 [2024-07-12 09:07:39.014796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.102 [2024-07-12 09:07:39.220873] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:04.667 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:04.667 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:41:04.667 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:04.667 [2024-07-12 09:07:39.858180] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:04.667 [2024-07-12 09:07:39.858280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:04.667 [2024-07-12 09:07:39.858296] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:04.667 [2024-07-12 09:07:39.858325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:04.925 09:07:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:05.183 09:07:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:05.183 "name": "Existed_Raid", 00:41:05.183 "uuid": "f88cda66-e002-43f5-a9ee-32207dfee55f", 
00:41:05.183 "strip_size_kb": 0, 00:41:05.183 "state": "configuring", 00:41:05.183 "raid_level": "raid1", 00:41:05.183 "superblock": true, 00:41:05.183 "num_base_bdevs": 2, 00:41:05.183 "num_base_bdevs_discovered": 0, 00:41:05.183 "num_base_bdevs_operational": 2, 00:41:05.183 "base_bdevs_list": [ 00:41:05.183 { 00:41:05.183 "name": "BaseBdev1", 00:41:05.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:05.183 "is_configured": false, 00:41:05.183 "data_offset": 0, 00:41:05.183 "data_size": 0 00:41:05.183 }, 00:41:05.183 { 00:41:05.183 "name": "BaseBdev2", 00:41:05.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:05.183 "is_configured": false, 00:41:05.183 "data_offset": 0, 00:41:05.183 "data_size": 0 00:41:05.183 } 00:41:05.183 ] 00:41:05.183 }' 00:41:05.183 09:07:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:05.183 09:07:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:05.750 09:07:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:06.008 [2024-07-12 09:07:41.138278] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:06.008 [2024-07-12 09:07:41.138342] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:41:06.008 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:06.266 [2024-07-12 09:07:41.426370] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:06.266 [2024-07-12 09:07:41.426447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:06.266 [2024-07-12 09:07:41.426460] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:06.266 [2024-07-12 09:07:41.426486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:06.266 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:41:06.832 [2024-07-12 09:07:41.749885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:06.832 BaseBdev1 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:41:06.832 09:07:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:07.089 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:07.346 [ 00:41:07.346 { 00:41:07.346 "name": "BaseBdev1", 00:41:07.346 "aliases": [ 00:41:07.346 "d2c97b4b-993a-4755-b8cb-7961b348ba42" 00:41:07.346 ], 00:41:07.346 "product_name": "Malloc disk", 00:41:07.346 "block_size": 4128, 00:41:07.346 "num_blocks": 8192, 00:41:07.346 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:07.346 "md_size": 32, 00:41:07.346 "md_interleave": true, 00:41:07.346 "dif_type": 0, 00:41:07.347 "assigned_rate_limits": { 00:41:07.347 "rw_ios_per_sec": 0, 00:41:07.347 "rw_mbytes_per_sec": 0, 00:41:07.347 "r_mbytes_per_sec": 0, 00:41:07.347 "w_mbytes_per_sec": 0 00:41:07.347 }, 00:41:07.347 "claimed": true, 00:41:07.347 "claim_type": "exclusive_write", 00:41:07.347 "zoned": false, 00:41:07.347 "supported_io_types": { 00:41:07.347 "read": true, 00:41:07.347 "write": true, 00:41:07.347 "unmap": true, 00:41:07.347 "flush": true, 00:41:07.347 "reset": true, 00:41:07.347 "nvme_admin": false, 00:41:07.347 "nvme_io": false, 00:41:07.347 "nvme_io_md": false, 00:41:07.347 "write_zeroes": true, 00:41:07.347 "zcopy": true, 00:41:07.347 "get_zone_info": false, 00:41:07.347 "zone_management": false, 00:41:07.347 "zone_append": false, 00:41:07.347 "compare": false, 00:41:07.347 "compare_and_write": false, 00:41:07.347 "abort": true, 00:41:07.347 "seek_hole": false, 00:41:07.347 "seek_data": false, 00:41:07.347 "copy": true, 00:41:07.347 "nvme_iov_md": false 00:41:07.347 }, 00:41:07.347 "memory_domains": [ 00:41:07.347 { 00:41:07.347 "dma_device_id": "system", 00:41:07.347 "dma_device_type": 1 00:41:07.347 }, 00:41:07.347 { 00:41:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:07.347 "dma_device_type": 2 00:41:07.347 } 00:41:07.347 ], 00:41:07.347 "driver_specific": {} 00:41:07.347 } 00:41:07.347 ] 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:07.347 09:07:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:07.347 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:07.605 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:07.605 "name": "Existed_Raid", 00:41:07.605 "uuid": "b24d2125-bad8-4ad0-a860-a45c4a989ae5", 00:41:07.605 "strip_size_kb": 0, 00:41:07.605 "state": "configuring", 00:41:07.605 "raid_level": "raid1", 00:41:07.605 "superblock": true, 00:41:07.605 "num_base_bdevs": 2, 00:41:07.605 "num_base_bdevs_discovered": 1, 00:41:07.605 "num_base_bdevs_operational": 2, 00:41:07.605 "base_bdevs_list": [ 00:41:07.605 { 00:41:07.605 "name": "BaseBdev1", 00:41:07.605 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:07.605 "is_configured": true, 00:41:07.605 "data_offset": 256, 00:41:07.605 "data_size": 7936 00:41:07.605 }, 00:41:07.605 { 00:41:07.605 "name": "BaseBdev2", 00:41:07.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:07.605 "is_configured": false, 00:41:07.605 "data_offset": 0, 00:41:07.605 "data_size": 0 00:41:07.605 } 00:41:07.605 ] 00:41:07.605 }' 00:41:07.605 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:07.605 09:07:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:08.169 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:41:08.427 [2024-07-12 09:07:43.578397] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:08.427 [2024-07-12 09:07:43.578467] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:41:08.427 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:41:08.684 [2024-07-12 09:07:43.870531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:08.684 [2024-07-12 09:07:43.872777] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:08.684 [2024-07-12 09:07:43.872840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:08.942 09:07:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:08.942 09:07:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:09.200 09:07:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:09.200 "name": "Existed_Raid", 00:41:09.200 "uuid": "65750b93-1f43-4107-a5f2-fb5b932bcc16", 00:41:09.200 "strip_size_kb": 0, 00:41:09.200 "state": "configuring", 00:41:09.200 "raid_level": "raid1", 00:41:09.200 "superblock": true, 00:41:09.200 "num_base_bdevs": 2, 00:41:09.200 "num_base_bdevs_discovered": 1, 00:41:09.200 "num_base_bdevs_operational": 2, 00:41:09.200 "base_bdevs_list": [ 00:41:09.200 { 00:41:09.200 "name": "BaseBdev1", 00:41:09.200 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:09.200 "is_configured": true, 00:41:09.200 "data_offset": 256, 00:41:09.200 "data_size": 7936 00:41:09.200 }, 00:41:09.200 { 00:41:09.200 "name": "BaseBdev2", 00:41:09.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:09.200 "is_configured": false, 00:41:09.200 "data_offset": 0, 00:41:09.200 "data_size": 0 00:41:09.200 } 00:41:09.200 ] 00:41:09.200 }' 00:41:09.200 09:07:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:09.200 09:07:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:09.765 09:07:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:41:10.329 [2024-07-12 09:07:45.227532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:10.329 [2024-07-12 09:07:45.227795] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:41:10.329 [2024-07-12 09:07:45.227813] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:10.329 [2024-07-12 09:07:45.227915] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:41:10.329 [2024-07-12 09:07:45.228028] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:41:10.329 [2024-07-12 09:07:45.228043] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:41:10.329 [2024-07-12 09:07:45.228145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:10.329 BaseBdev2 00:41:10.329 09:07:45 
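The state checks above (verify_raid_bdev_state) amount to registering the raid, letting it sit in "configuring" until every base bdev exists, and then filtering bdev_raid_get_bdevs output with jq. A rough sketch of that flow, using the same commands the test traces:

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # With a base bdev still missing, the raid registers in the "configuring" state.
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # Creating the last missing base bdev lets the raid finish configuring and go online.
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2

  # The verification helper boils down to this query.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'
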
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:41:10.329 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:41:10.587 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:10.845 [ 00:41:10.845 { 00:41:10.845 "name": "BaseBdev2", 00:41:10.845 "aliases": [ 00:41:10.845 "59f9da3b-f0fa-4483-9a8d-f9ec387ed882" 00:41:10.845 ], 00:41:10.845 "product_name": "Malloc disk", 00:41:10.845 "block_size": 4128, 00:41:10.845 "num_blocks": 8192, 00:41:10.845 "uuid": "59f9da3b-f0fa-4483-9a8d-f9ec387ed882", 00:41:10.845 "md_size": 32, 00:41:10.845 "md_interleave": true, 00:41:10.845 "dif_type": 0, 00:41:10.845 "assigned_rate_limits": { 00:41:10.845 "rw_ios_per_sec": 0, 00:41:10.845 "rw_mbytes_per_sec": 0, 00:41:10.845 "r_mbytes_per_sec": 0, 00:41:10.845 "w_mbytes_per_sec": 0 00:41:10.845 }, 00:41:10.845 "claimed": true, 00:41:10.845 "claim_type": "exclusive_write", 00:41:10.845 "zoned": false, 00:41:10.845 "supported_io_types": { 00:41:10.845 "read": true, 00:41:10.845 "write": true, 00:41:10.845 "unmap": true, 00:41:10.845 "flush": true, 00:41:10.845 "reset": true, 00:41:10.845 "nvme_admin": false, 00:41:10.845 "nvme_io": false, 00:41:10.845 "nvme_io_md": false, 00:41:10.845 "write_zeroes": true, 00:41:10.845 "zcopy": true, 00:41:10.845 "get_zone_info": false, 00:41:10.845 "zone_management": false, 00:41:10.845 "zone_append": false, 00:41:10.845 "compare": false, 00:41:10.845 "compare_and_write": false, 00:41:10.845 "abort": true, 00:41:10.845 "seek_hole": false, 00:41:10.845 "seek_data": false, 00:41:10.845 "copy": true, 00:41:10.845 "nvme_iov_md": false 00:41:10.845 }, 00:41:10.845 "memory_domains": [ 00:41:10.845 { 00:41:10.845 "dma_device_id": "system", 00:41:10.845 "dma_device_type": 1 00:41:10.845 }, 00:41:10.845 { 00:41:10.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:10.845 "dma_device_type": 2 00:41:10.845 } 00:41:10.845 ], 00:41:10.845 "driver_specific": {} 00:41:10.845 } 00:41:10.845 ] 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:10.845 09:07:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:11.104 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:11.104 "name": "Existed_Raid", 00:41:11.104 "uuid": "65750b93-1f43-4107-a5f2-fb5b932bcc16", 00:41:11.104 "strip_size_kb": 0, 00:41:11.104 "state": "online", 00:41:11.104 "raid_level": "raid1", 00:41:11.104 "superblock": true, 00:41:11.104 "num_base_bdevs": 2, 00:41:11.104 "num_base_bdevs_discovered": 2, 00:41:11.104 "num_base_bdevs_operational": 2, 00:41:11.104 "base_bdevs_list": [ 00:41:11.104 { 00:41:11.104 "name": "BaseBdev1", 00:41:11.104 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:11.104 "is_configured": true, 00:41:11.104 "data_offset": 256, 00:41:11.104 "data_size": 7936 00:41:11.104 }, 00:41:11.104 { 00:41:11.104 "name": "BaseBdev2", 00:41:11.104 "uuid": "59f9da3b-f0fa-4483-9a8d-f9ec387ed882", 00:41:11.104 "is_configured": true, 00:41:11.104 "data_offset": 256, 00:41:11.104 "data_size": 7936 00:41:11.104 } 00:41:11.104 ] 00:41:11.104 }' 00:41:11.104 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:11.104 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:41:11.676 09:07:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:41:11.950 [2024-07-12 09:07:47.124397] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:41:12.220 "name": "Existed_Raid", 00:41:12.220 "aliases": [ 00:41:12.220 "65750b93-1f43-4107-a5f2-fb5b932bcc16" 00:41:12.220 ], 00:41:12.220 "product_name": "Raid Volume", 00:41:12.220 "block_size": 4128, 00:41:12.220 "num_blocks": 7936, 00:41:12.220 "uuid": "65750b93-1f43-4107-a5f2-fb5b932bcc16", 00:41:12.220 "md_size": 32, 00:41:12.220 "md_interleave": true, 00:41:12.220 "dif_type": 0, 00:41:12.220 "assigned_rate_limits": { 00:41:12.220 "rw_ios_per_sec": 0, 00:41:12.220 "rw_mbytes_per_sec": 0, 00:41:12.220 "r_mbytes_per_sec": 0, 00:41:12.220 "w_mbytes_per_sec": 0 00:41:12.220 }, 00:41:12.220 "claimed": false, 00:41:12.220 "zoned": false, 00:41:12.220 "supported_io_types": { 00:41:12.220 "read": true, 00:41:12.220 "write": true, 00:41:12.220 "unmap": false, 00:41:12.220 "flush": false, 00:41:12.220 "reset": true, 00:41:12.220 "nvme_admin": false, 00:41:12.220 "nvme_io": false, 00:41:12.220 "nvme_io_md": false, 00:41:12.220 "write_zeroes": true, 00:41:12.220 "zcopy": false, 00:41:12.220 "get_zone_info": false, 00:41:12.220 "zone_management": false, 00:41:12.220 "zone_append": false, 00:41:12.220 "compare": false, 00:41:12.220 "compare_and_write": false, 00:41:12.220 "abort": false, 00:41:12.220 "seek_hole": false, 00:41:12.220 "seek_data": false, 00:41:12.220 "copy": false, 00:41:12.220 "nvme_iov_md": false 00:41:12.220 }, 00:41:12.220 "memory_domains": [ 00:41:12.220 { 00:41:12.220 "dma_device_id": "system", 00:41:12.220 "dma_device_type": 1 00:41:12.220 }, 00:41:12.220 { 00:41:12.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:12.220 "dma_device_type": 2 00:41:12.220 }, 00:41:12.220 { 00:41:12.220 "dma_device_id": "system", 00:41:12.220 "dma_device_type": 1 00:41:12.220 }, 00:41:12.220 { 00:41:12.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:12.220 "dma_device_type": 2 00:41:12.220 } 00:41:12.220 ], 00:41:12.220 "driver_specific": { 00:41:12.220 "raid": { 00:41:12.220 "uuid": "65750b93-1f43-4107-a5f2-fb5b932bcc16", 00:41:12.220 "strip_size_kb": 0, 00:41:12.220 "state": "online", 00:41:12.220 "raid_level": "raid1", 00:41:12.220 "superblock": true, 00:41:12.220 "num_base_bdevs": 2, 00:41:12.220 "num_base_bdevs_discovered": 2, 00:41:12.220 "num_base_bdevs_operational": 2, 00:41:12.220 "base_bdevs_list": [ 00:41:12.220 { 00:41:12.220 "name": "BaseBdev1", 00:41:12.220 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:12.220 "is_configured": true, 00:41:12.220 "data_offset": 256, 00:41:12.220 "data_size": 7936 00:41:12.220 }, 00:41:12.220 { 00:41:12.220 "name": "BaseBdev2", 00:41:12.220 "uuid": "59f9da3b-f0fa-4483-9a8d-f9ec387ed882", 00:41:12.220 "is_configured": true, 00:41:12.220 "data_offset": 256, 00:41:12.220 "data_size": 7936 00:41:12.220 } 00:41:12.220 ] 00:41:12.220 } 00:41:12.220 } 00:41:12.220 }' 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:41:12.220 BaseBdev2' 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:41:12.220 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:12.479 "name": "BaseBdev1", 00:41:12.479 "aliases": [ 00:41:12.479 "d2c97b4b-993a-4755-b8cb-7961b348ba42" 00:41:12.479 ], 00:41:12.479 "product_name": "Malloc disk", 00:41:12.479 "block_size": 4128, 00:41:12.479 "num_blocks": 8192, 00:41:12.479 "uuid": "d2c97b4b-993a-4755-b8cb-7961b348ba42", 00:41:12.479 "md_size": 32, 00:41:12.479 "md_interleave": true, 00:41:12.479 "dif_type": 0, 00:41:12.479 "assigned_rate_limits": { 00:41:12.479 "rw_ios_per_sec": 0, 00:41:12.479 "rw_mbytes_per_sec": 0, 00:41:12.479 "r_mbytes_per_sec": 0, 00:41:12.479 "w_mbytes_per_sec": 0 00:41:12.479 }, 00:41:12.479 "claimed": true, 00:41:12.479 "claim_type": "exclusive_write", 00:41:12.479 "zoned": false, 00:41:12.479 "supported_io_types": { 00:41:12.479 "read": true, 00:41:12.479 "write": true, 00:41:12.479 "unmap": true, 00:41:12.479 "flush": true, 00:41:12.479 "reset": true, 00:41:12.479 "nvme_admin": false, 00:41:12.479 "nvme_io": false, 00:41:12.479 "nvme_io_md": false, 00:41:12.479 "write_zeroes": true, 00:41:12.479 "zcopy": true, 00:41:12.479 "get_zone_info": false, 00:41:12.479 "zone_management": false, 00:41:12.479 "zone_append": false, 00:41:12.479 "compare": false, 00:41:12.479 "compare_and_write": false, 00:41:12.479 "abort": true, 00:41:12.479 "seek_hole": false, 00:41:12.479 "seek_data": false, 00:41:12.479 "copy": true, 00:41:12.479 "nvme_iov_md": false 00:41:12.479 }, 00:41:12.479 "memory_domains": [ 00:41:12.479 { 00:41:12.479 "dma_device_id": "system", 00:41:12.479 "dma_device_type": 1 00:41:12.479 }, 00:41:12.479 { 00:41:12.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:12.479 "dma_device_type": 2 00:41:12.479 } 00:41:12.479 ], 00:41:12.479 "driver_specific": {} 00:41:12.479 }' 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:12.479 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:12.737 
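The per-base-bdev property checks in this stretch boil down to a handful of jq probes against bdev_get_bdevs output, comparing block_size, md_size, md_interleave and dif_type with the expected interleaved-metadata layout. A sketch of the same probes:

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Grab the bdev description once and probe the interleaved-metadata fields.
  info=$("$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 | jq '.[]')

  # 4096-byte blocks + 32 bytes of interleaved metadata -> 4128-byte block size.
  [[ $(jq .block_size    <<< "$info") == 4128 ]]
  [[ $(jq .md_size       <<< "$info") == 32 ]]
  [[ $(jq .md_interleave <<< "$info") == true ]]
  [[ $(jq .dif_type      <<< "$info") == 0 ]]
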
09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:12.737 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:41:12.995 09:07:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:12.995 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:12.995 "name": "BaseBdev2", 00:41:12.995 "aliases": [ 00:41:12.995 "59f9da3b-f0fa-4483-9a8d-f9ec387ed882" 00:41:12.995 ], 00:41:12.995 "product_name": "Malloc disk", 00:41:12.995 "block_size": 4128, 00:41:12.995 "num_blocks": 8192, 00:41:12.995 "uuid": "59f9da3b-f0fa-4483-9a8d-f9ec387ed882", 00:41:12.995 "md_size": 32, 00:41:12.995 "md_interleave": true, 00:41:12.995 "dif_type": 0, 00:41:12.995 "assigned_rate_limits": { 00:41:12.995 "rw_ios_per_sec": 0, 00:41:12.995 "rw_mbytes_per_sec": 0, 00:41:12.995 "r_mbytes_per_sec": 0, 00:41:12.995 "w_mbytes_per_sec": 0 00:41:12.995 }, 00:41:12.995 "claimed": true, 00:41:12.995 "claim_type": "exclusive_write", 00:41:12.995 "zoned": false, 00:41:12.995 "supported_io_types": { 00:41:12.995 "read": true, 00:41:12.995 "write": true, 00:41:12.996 "unmap": true, 00:41:12.996 "flush": true, 00:41:12.996 "reset": true, 00:41:12.996 "nvme_admin": false, 00:41:12.996 "nvme_io": false, 00:41:12.996 "nvme_io_md": false, 00:41:12.996 "write_zeroes": true, 00:41:12.996 "zcopy": true, 00:41:12.996 "get_zone_info": false, 00:41:12.996 "zone_management": false, 00:41:12.996 "zone_append": false, 00:41:12.996 "compare": false, 00:41:12.996 "compare_and_write": false, 00:41:12.996 "abort": true, 00:41:12.996 "seek_hole": false, 00:41:12.996 "seek_data": false, 00:41:12.996 "copy": true, 00:41:12.996 "nvme_iov_md": false 00:41:12.996 }, 00:41:12.996 "memory_domains": [ 00:41:12.996 { 00:41:12.996 "dma_device_id": "system", 00:41:12.996 "dma_device_type": 1 00:41:12.996 }, 00:41:12.996 { 00:41:12.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:12.996 "dma_device_type": 2 00:41:12.996 } 00:41:12.996 ], 00:41:12.996 "driver_specific": {} 00:41:12.996 }' 00:41:12.996 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:13.254 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:13.512 09:07:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:41:13.770 [2024-07-12 09:07:48.900547] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:14.028 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:14.293 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:14.293 "name": "Existed_Raid", 00:41:14.293 "uuid": "65750b93-1f43-4107-a5f2-fb5b932bcc16", 00:41:14.293 "strip_size_kb": 0, 00:41:14.293 "state": "online", 00:41:14.293 "raid_level": "raid1", 00:41:14.293 "superblock": true, 00:41:14.293 "num_base_bdevs": 2, 00:41:14.293 "num_base_bdevs_discovered": 1, 00:41:14.293 "num_base_bdevs_operational": 1, 00:41:14.293 "base_bdevs_list": [ 00:41:14.293 { 00:41:14.293 "name": null, 
00:41:14.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:14.293 "is_configured": false, 00:41:14.293 "data_offset": 256, 00:41:14.293 "data_size": 7936 00:41:14.293 }, 00:41:14.293 { 00:41:14.294 "name": "BaseBdev2", 00:41:14.294 "uuid": "59f9da3b-f0fa-4483-9a8d-f9ec387ed882", 00:41:14.294 "is_configured": true, 00:41:14.294 "data_offset": 256, 00:41:14.294 "data_size": 7936 00:41:14.294 } 00:41:14.294 ] 00:41:14.294 }' 00:41:14.294 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:14.294 09:07:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:14.861 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:41:14.861 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:41:14.861 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:14.861 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:41:15.427 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:41:15.427 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:15.427 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:41:15.427 [2024-07-12 09:07:50.546201] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:15.427 [2024-07-12 09:07:50.546352] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:15.684 [2024-07-12 09:07:50.630993] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:15.684 [2024-07-12 09:07:50.631064] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:15.684 [2024-07-12 09:07:50.631076] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:41:15.684 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:41:15.684 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:41:15.685 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:15.685 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 165792 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 165792 ']' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 165792 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 165792 00:41:15.942 killing process with pid 165792 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 165792' 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 165792 00:41:15.942 09:07:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 165792 00:41:15.942 [2024-07-12 09:07:50.997644] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:15.942 [2024-07-12 09:07:50.997765] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:17.313 ************************************ 00:41:17.313 END TEST raid_state_function_test_sb_md_interleaved 00:41:17.313 ************************************ 00:41:17.313 09:07:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:41:17.313 00:41:17.313 real 0m13.559s 00:41:17.313 user 0m24.166s 00:41:17.313 sys 0m1.629s 00:41:17.313 09:07:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:17.313 09:07:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:17.313 09:07:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:41:17.313 09:07:52 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:41:17.313 09:07:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:41:17.314 09:07:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:17.314 09:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:17.314 ************************************ 00:41:17.314 START TEST raid_superblock_test_md_interleaved 00:41:17.314 ************************************ 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:41:17.314 09:07:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=166190 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 166190 /var/tmp/spdk-raid.sock 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 166190 ']' 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:17.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:17.314 09:07:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:17.314 [2024-07-12 09:07:52.233961] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
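The raid_superblock_test that starts here builds its array on passthru bdevs layered over interleaved-metadata malloc bdevs, with the on-disk superblock enabled (-s). The topology it goes on to construct is roughly the following (a sketch of the RPC calls that appear in the trace below, assuming the bdev_svc app launched above is listening on /var/tmp/spdk-raid.sock):

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Interleaved-metadata malloc bdevs as the backing devices.
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b malloc1
  "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b malloc2

  # Passthru bdevs with fixed UUIDs layered on top.
  "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

  # raid1 across the passthru bdevs, with an on-disk superblock (-s).
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1
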
00:41:17.314 [2024-07-12 09:07:52.234143] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166190 ] 00:41:17.314 [2024-07-12 09:07:52.392583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.571 [2024-07-12 09:07:52.609597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.829 [2024-07-12 09:07:52.808075] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:18.087 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:41:18.344 malloc1 00:41:18.344 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:18.602 [2024-07-12 09:07:53.715409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:18.602 [2024-07-12 09:07:53.715563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:18.602 [2024-07-12 09:07:53.715609] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:41:18.602 [2024-07-12 09:07:53.715633] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:18.602 [2024-07-12 09:07:53.717927] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:18.602 [2024-07-12 09:07:53.717990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:18.602 pt1 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:18.602 09:07:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:41:18.859 malloc2 00:41:18.859 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:19.116 [2024-07-12 09:07:54.253950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:19.116 [2024-07-12 09:07:54.254116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:19.116 [2024-07-12 09:07:54.254172] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:41:19.116 [2024-07-12 09:07:54.254195] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:19.116 [2024-07-12 09:07:54.256521] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:19.116 [2024-07-12 09:07:54.256601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:19.116 pt2 00:41:19.116 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:41:19.116 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:41:19.116 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:41:19.375 [2024-07-12 09:07:54.558203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:19.375 [2024-07-12 09:07:54.560799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:19.375 [2024-07-12 09:07:54.561088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:41:19.375 [2024-07-12 09:07:54.561107] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:19.375 [2024-07-12 09:07:54.561239] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:41:19.375 [2024-07-12 09:07:54.561324] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:41:19.375 [2024-07-12 09:07:54.561337] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:41:19.375 [2024-07-12 09:07:54.561418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:19.632 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.890 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:19.890 "name": "raid_bdev1", 00:41:19.890 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:19.890 "strip_size_kb": 0, 00:41:19.890 "state": "online", 00:41:19.890 "raid_level": "raid1", 00:41:19.890 "superblock": true, 00:41:19.890 "num_base_bdevs": 2, 00:41:19.890 "num_base_bdevs_discovered": 2, 00:41:19.890 "num_base_bdevs_operational": 2, 00:41:19.890 "base_bdevs_list": [ 00:41:19.890 { 00:41:19.891 "name": "pt1", 00:41:19.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:19.891 "is_configured": true, 00:41:19.891 "data_offset": 256, 00:41:19.891 "data_size": 7936 00:41:19.891 }, 00:41:19.891 { 00:41:19.891 "name": "pt2", 00:41:19.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:19.891 "is_configured": true, 00:41:19.891 "data_offset": 256, 00:41:19.891 "data_size": 7936 00:41:19.891 } 00:41:19.891 ] 00:41:19.891 }' 00:41:19.891 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:19.891 09:07:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:20.457 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:41:20.715 [2024-07-12 09:07:55.838571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:20.715 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:41:20.715 "name": "raid_bdev1", 00:41:20.715 "aliases": [ 00:41:20.715 "ba0482ce-3541-4c65-b148-525a9e1e818e" 00:41:20.715 ], 00:41:20.715 "product_name": "Raid Volume", 00:41:20.715 "block_size": 4128, 00:41:20.715 "num_blocks": 7936, 00:41:20.715 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:20.715 "md_size": 32, 00:41:20.715 "md_interleave": true, 00:41:20.715 "dif_type": 0, 00:41:20.715 "assigned_rate_limits": { 00:41:20.715 "rw_ios_per_sec": 0, 00:41:20.715 "rw_mbytes_per_sec": 0, 00:41:20.715 "r_mbytes_per_sec": 0, 00:41:20.715 "w_mbytes_per_sec": 0 00:41:20.715 }, 00:41:20.715 "claimed": false, 00:41:20.715 "zoned": false, 00:41:20.715 "supported_io_types": { 00:41:20.715 "read": true, 00:41:20.715 "write": true, 00:41:20.715 "unmap": false, 00:41:20.715 "flush": false, 00:41:20.715 "reset": true, 00:41:20.715 "nvme_admin": false, 00:41:20.715 "nvme_io": false, 00:41:20.715 "nvme_io_md": false, 00:41:20.715 "write_zeroes": true, 00:41:20.715 "zcopy": false, 00:41:20.715 "get_zone_info": false, 00:41:20.715 "zone_management": false, 00:41:20.715 "zone_append": false, 00:41:20.715 "compare": false, 00:41:20.715 "compare_and_write": false, 00:41:20.715 "abort": false, 00:41:20.715 "seek_hole": false, 00:41:20.715 "seek_data": false, 00:41:20.715 "copy": false, 00:41:20.715 "nvme_iov_md": false 00:41:20.715 }, 00:41:20.715 "memory_domains": [ 00:41:20.715 { 00:41:20.715 "dma_device_id": "system", 00:41:20.715 "dma_device_type": 1 00:41:20.715 }, 00:41:20.715 { 00:41:20.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:20.715 "dma_device_type": 2 00:41:20.715 }, 00:41:20.715 { 00:41:20.715 "dma_device_id": "system", 00:41:20.715 "dma_device_type": 1 00:41:20.715 }, 00:41:20.715 { 00:41:20.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:20.715 "dma_device_type": 2 00:41:20.715 } 00:41:20.715 ], 00:41:20.715 "driver_specific": { 00:41:20.715 "raid": { 00:41:20.715 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:20.715 "strip_size_kb": 0, 00:41:20.715 "state": "online", 00:41:20.715 "raid_level": "raid1", 00:41:20.715 "superblock": true, 00:41:20.715 "num_base_bdevs": 2, 00:41:20.715 "num_base_bdevs_discovered": 2, 00:41:20.715 "num_base_bdevs_operational": 2, 00:41:20.715 "base_bdevs_list": [ 00:41:20.715 { 00:41:20.715 "name": "pt1", 00:41:20.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:20.715 "is_configured": true, 00:41:20.715 "data_offset": 256, 00:41:20.715 "data_size": 7936 00:41:20.715 }, 00:41:20.715 { 00:41:20.715 "name": "pt2", 00:41:20.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:20.715 "is_configured": true, 00:41:20.715 "data_offset": 256, 00:41:20.715 "data_size": 7936 00:41:20.715 } 00:41:20.715 ] 00:41:20.715 } 00:41:20.715 } 00:41:20.715 }' 00:41:20.715 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:20.973 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:41:20.973 pt2' 00:41:20.973 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:20.973 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:41:20.973 09:07:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:21.230 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:21.230 "name": "pt1", 00:41:21.230 "aliases": [ 00:41:21.230 "00000000-0000-0000-0000-000000000001" 00:41:21.230 ], 00:41:21.230 "product_name": "passthru", 00:41:21.230 "block_size": 4128, 00:41:21.230 "num_blocks": 8192, 00:41:21.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:21.230 "md_size": 32, 00:41:21.230 "md_interleave": true, 00:41:21.230 "dif_type": 0, 00:41:21.230 "assigned_rate_limits": { 00:41:21.230 "rw_ios_per_sec": 0, 00:41:21.230 "rw_mbytes_per_sec": 0, 00:41:21.230 "r_mbytes_per_sec": 0, 00:41:21.230 "w_mbytes_per_sec": 0 00:41:21.230 }, 00:41:21.230 "claimed": true, 00:41:21.230 "claim_type": "exclusive_write", 00:41:21.230 "zoned": false, 00:41:21.230 "supported_io_types": { 00:41:21.230 "read": true, 00:41:21.230 "write": true, 00:41:21.230 "unmap": true, 00:41:21.230 "flush": true, 00:41:21.230 "reset": true, 00:41:21.230 "nvme_admin": false, 00:41:21.230 "nvme_io": false, 00:41:21.230 "nvme_io_md": false, 00:41:21.230 "write_zeroes": true, 00:41:21.230 "zcopy": true, 00:41:21.230 "get_zone_info": false, 00:41:21.230 "zone_management": false, 00:41:21.230 "zone_append": false, 00:41:21.230 "compare": false, 00:41:21.230 "compare_and_write": false, 00:41:21.230 "abort": true, 00:41:21.230 "seek_hole": false, 00:41:21.230 "seek_data": false, 00:41:21.230 "copy": true, 00:41:21.230 "nvme_iov_md": false 00:41:21.230 }, 00:41:21.230 "memory_domains": [ 00:41:21.230 { 00:41:21.230 "dma_device_id": "system", 00:41:21.230 "dma_device_type": 1 00:41:21.230 }, 00:41:21.230 { 00:41:21.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:21.231 "dma_device_type": 2 00:41:21.231 } 00:41:21.231 ], 00:41:21.231 "driver_specific": { 00:41:21.231 "passthru": { 00:41:21.231 "name": "pt1", 00:41:21.231 "base_bdev_name": "malloc1" 00:41:21.231 } 00:41:21.231 } 00:41:21.231 }' 00:41:21.231 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:21.231 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:21.231 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:21.231 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:21.551 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:21.809 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:21.809 09:07:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:21.809 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:41:21.809 09:07:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:22.066 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:22.066 "name": "pt2", 00:41:22.066 "aliases": [ 00:41:22.066 "00000000-0000-0000-0000-000000000002" 00:41:22.066 ], 00:41:22.066 "product_name": "passthru", 00:41:22.066 "block_size": 4128, 00:41:22.066 "num_blocks": 8192, 00:41:22.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:22.066 "md_size": 32, 00:41:22.066 "md_interleave": true, 00:41:22.066 "dif_type": 0, 00:41:22.066 "assigned_rate_limits": { 00:41:22.066 "rw_ios_per_sec": 0, 00:41:22.066 "rw_mbytes_per_sec": 0, 00:41:22.066 "r_mbytes_per_sec": 0, 00:41:22.066 "w_mbytes_per_sec": 0 00:41:22.066 }, 00:41:22.066 "claimed": true, 00:41:22.066 "claim_type": "exclusive_write", 00:41:22.066 "zoned": false, 00:41:22.067 "supported_io_types": { 00:41:22.067 "read": true, 00:41:22.067 "write": true, 00:41:22.067 "unmap": true, 00:41:22.067 "flush": true, 00:41:22.067 "reset": true, 00:41:22.067 "nvme_admin": false, 00:41:22.067 "nvme_io": false, 00:41:22.067 "nvme_io_md": false, 00:41:22.067 "write_zeroes": true, 00:41:22.067 "zcopy": true, 00:41:22.067 "get_zone_info": false, 00:41:22.067 "zone_management": false, 00:41:22.067 "zone_append": false, 00:41:22.067 "compare": false, 00:41:22.067 "compare_and_write": false, 00:41:22.067 "abort": true, 00:41:22.067 "seek_hole": false, 00:41:22.067 "seek_data": false, 00:41:22.067 "copy": true, 00:41:22.067 "nvme_iov_md": false 00:41:22.067 }, 00:41:22.067 "memory_domains": [ 00:41:22.067 { 00:41:22.067 "dma_device_id": "system", 00:41:22.067 "dma_device_type": 1 00:41:22.067 }, 00:41:22.067 { 00:41:22.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:22.067 "dma_device_type": 2 00:41:22.067 } 00:41:22.067 ], 00:41:22.067 "driver_specific": { 00:41:22.067 "passthru": { 00:41:22.067 "name": "pt2", 00:41:22.067 "base_bdev_name": "malloc2" 00:41:22.067 } 00:41:22.067 } 00:41:22.067 }' 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:22.067 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:22.324 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:41:22.583 [2024-07-12 09:07:57.715128] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:22.583 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ba0482ce-3541-4c65-b148-525a9e1e818e 00:41:22.583 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z ba0482ce-3541-4c65-b148-525a9e1e818e ']' 00:41:22.583 09:07:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:23.148 [2024-07-12 09:07:58.042755] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:23.148 [2024-07-12 09:07:58.043044] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:23.148 [2024-07-12 09:07:58.043261] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:23.148 [2024-07-12 09:07:58.043461] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:23.148 [2024-07-12 09:07:58.043583] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:41:23.148 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:41:23.406 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:41:23.406 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:41:23.664 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:41:23.664 09:07:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:41:23.922 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:41:23.922 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:23.922 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:41:23.922 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:23.922 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:41:23.923 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:41:24.490 [2024-07-12 09:07:59.431050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:41:24.490 [2024-07-12 09:07:59.433508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:41:24.490 [2024-07-12 09:07:59.433742] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:41:24.490 [2024-07-12 09:07:59.433989] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:41:24.490 [2024-07-12 09:07:59.434153] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:24.490 [2024-07-12 09:07:59.434195] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:41:24.490 request: 00:41:24.490 { 00:41:24.490 "name": "raid_bdev1", 00:41:24.490 "raid_level": "raid1", 00:41:24.490 "base_bdevs": [ 00:41:24.490 "malloc1", 00:41:24.490 "malloc2" 00:41:24.490 ], 00:41:24.490 "superblock": false, 00:41:24.490 "method": "bdev_raid_create", 00:41:24.490 "req_id": 1 00:41:24.490 } 00:41:24.490 Got JSON-RPC error response 00:41:24.490 response: 00:41:24.490 { 00:41:24.490 "code": -17, 00:41:24.490 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:41:24.490 } 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:24.490 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:41:24.750 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:41:24.750 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:41:24.750 09:07:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:25.009 [2024-07-12 09:08:00.047078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:25.009 [2024-07-12 09:08:00.047404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:25.009 [2024-07-12 09:08:00.047557] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:41:25.009 [2024-07-12 09:08:00.047683] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:25.009 [2024-07-12 09:08:00.050084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:25.009 [2024-07-12 09:08:00.050278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:25.009 [2024-07-12 09:08:00.050474] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:25.009 [2024-07-12 09:08:00.050629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:25.009 pt1 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:25.009 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:25.267 09:08:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:25.267 "name": "raid_bdev1", 00:41:25.267 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:25.267 "strip_size_kb": 0, 00:41:25.267 "state": "configuring", 00:41:25.267 "raid_level": "raid1", 00:41:25.267 "superblock": true, 00:41:25.267 "num_base_bdevs": 2, 00:41:25.267 "num_base_bdevs_discovered": 1, 00:41:25.267 "num_base_bdevs_operational": 2, 00:41:25.267 "base_bdevs_list": [ 00:41:25.267 { 00:41:25.267 "name": "pt1", 00:41:25.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:25.267 "is_configured": true, 00:41:25.267 "data_offset": 256, 00:41:25.267 "data_size": 7936 00:41:25.267 }, 00:41:25.267 { 00:41:25.267 "name": null, 00:41:25.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:25.267 "is_configured": false, 00:41:25.267 "data_offset": 256, 00:41:25.267 "data_size": 7936 00:41:25.267 } 00:41:25.267 ] 00:41:25.267 }' 00:41:25.267 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:25.267 09:08:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:26.248 [2024-07-12 09:08:01.319369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:26.248 [2024-07-12 09:08:01.319739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:26.248 [2024-07-12 09:08:01.319904] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:26.248 [2024-07-12 09:08:01.320029] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:26.248 [2024-07-12 09:08:01.320377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:26.248 [2024-07-12 09:08:01.320567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:26.248 [2024-07-12 09:08:01.320752] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:26.248 [2024-07-12 09:08:01.320815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:26.248 [2024-07-12 09:08:01.321022] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:41:26.248 [2024-07-12 09:08:01.321179] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:26.248 [2024-07-12 09:08:01.321307] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:41:26.248 [2024-07-12 09:08:01.321495] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:41:26.248 [2024-07-12 09:08:01.321614] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:41:26.248 [2024-07-12 09:08:01.321777] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:26.248 pt2 00:41:26.248 
09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:26.248 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:26.507 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:26.507 "name": "raid_bdev1", 00:41:26.507 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:26.507 "strip_size_kb": 0, 00:41:26.507 "state": "online", 00:41:26.507 "raid_level": "raid1", 00:41:26.507 "superblock": true, 00:41:26.507 "num_base_bdevs": 2, 00:41:26.507 "num_base_bdevs_discovered": 2, 00:41:26.507 "num_base_bdevs_operational": 2, 00:41:26.507 "base_bdevs_list": [ 00:41:26.507 { 00:41:26.507 "name": "pt1", 00:41:26.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:26.507 "is_configured": true, 00:41:26.507 "data_offset": 256, 00:41:26.507 "data_size": 7936 00:41:26.507 }, 00:41:26.507 { 00:41:26.507 "name": "pt2", 00:41:26.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:26.507 "is_configured": true, 00:41:26.507 "data_offset": 256, 00:41:26.507 "data_size": 7936 00:41:26.507 } 00:41:26.507 ] 00:41:26.507 }' 00:41:26.507 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:26.507 09:08:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:27.438 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:41:27.438 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:41:27.438 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:41:27.438 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:41:27.438 09:08:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:41:27.439 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:41:27.439 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:27.439 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:41:27.439 [2024-07-12 09:08:02.587934] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:27.439 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:41:27.439 "name": "raid_bdev1", 00:41:27.439 "aliases": [ 00:41:27.439 "ba0482ce-3541-4c65-b148-525a9e1e818e" 00:41:27.439 ], 00:41:27.439 "product_name": "Raid Volume", 00:41:27.439 "block_size": 4128, 00:41:27.439 "num_blocks": 7936, 00:41:27.439 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:27.439 "md_size": 32, 00:41:27.439 "md_interleave": true, 00:41:27.439 "dif_type": 0, 00:41:27.439 "assigned_rate_limits": { 00:41:27.439 "rw_ios_per_sec": 0, 00:41:27.439 "rw_mbytes_per_sec": 0, 00:41:27.439 "r_mbytes_per_sec": 0, 00:41:27.439 "w_mbytes_per_sec": 0 00:41:27.439 }, 00:41:27.439 "claimed": false, 00:41:27.439 "zoned": false, 00:41:27.439 "supported_io_types": { 00:41:27.439 "read": true, 00:41:27.439 "write": true, 00:41:27.439 "unmap": false, 00:41:27.439 "flush": false, 00:41:27.439 "reset": true, 00:41:27.439 "nvme_admin": false, 00:41:27.439 "nvme_io": false, 00:41:27.439 "nvme_io_md": false, 00:41:27.439 "write_zeroes": true, 00:41:27.439 "zcopy": false, 00:41:27.439 "get_zone_info": false, 00:41:27.439 "zone_management": false, 00:41:27.439 "zone_append": false, 00:41:27.439 "compare": false, 00:41:27.439 "compare_and_write": false, 00:41:27.439 "abort": false, 00:41:27.439 "seek_hole": false, 00:41:27.439 "seek_data": false, 00:41:27.439 "copy": false, 00:41:27.439 "nvme_iov_md": false 00:41:27.439 }, 00:41:27.439 "memory_domains": [ 00:41:27.439 { 00:41:27.439 "dma_device_id": "system", 00:41:27.439 "dma_device_type": 1 00:41:27.439 }, 00:41:27.439 { 00:41:27.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:27.439 "dma_device_type": 2 00:41:27.439 }, 00:41:27.439 { 00:41:27.439 "dma_device_id": "system", 00:41:27.439 "dma_device_type": 1 00:41:27.439 }, 00:41:27.439 { 00:41:27.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:27.439 "dma_device_type": 2 00:41:27.439 } 00:41:27.439 ], 00:41:27.439 "driver_specific": { 00:41:27.439 "raid": { 00:41:27.439 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:27.439 "strip_size_kb": 0, 00:41:27.439 "state": "online", 00:41:27.439 "raid_level": "raid1", 00:41:27.439 "superblock": true, 00:41:27.439 "num_base_bdevs": 2, 00:41:27.439 "num_base_bdevs_discovered": 2, 00:41:27.439 "num_base_bdevs_operational": 2, 00:41:27.439 "base_bdevs_list": [ 00:41:27.439 { 00:41:27.439 "name": "pt1", 00:41:27.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:27.439 "is_configured": true, 00:41:27.439 "data_offset": 256, 00:41:27.439 "data_size": 7936 00:41:27.439 }, 00:41:27.439 { 00:41:27.439 "name": "pt2", 00:41:27.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:27.439 "is_configured": true, 00:41:27.439 "data_offset": 256, 00:41:27.439 "data_size": 7936 00:41:27.439 } 00:41:27.439 ] 00:41:27.439 } 00:41:27.439 } 00:41:27.439 }' 00:41:27.439 09:08:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:27.696 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:41:27.696 pt2' 00:41:27.696 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:27.696 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:27.696 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:41:27.954 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:27.954 "name": "pt1", 00:41:27.954 "aliases": [ 00:41:27.954 "00000000-0000-0000-0000-000000000001" 00:41:27.954 ], 00:41:27.954 "product_name": "passthru", 00:41:27.954 "block_size": 4128, 00:41:27.954 "num_blocks": 8192, 00:41:27.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:27.954 "md_size": 32, 00:41:27.954 "md_interleave": true, 00:41:27.954 "dif_type": 0, 00:41:27.954 "assigned_rate_limits": { 00:41:27.954 "rw_ios_per_sec": 0, 00:41:27.954 "rw_mbytes_per_sec": 0, 00:41:27.954 "r_mbytes_per_sec": 0, 00:41:27.954 "w_mbytes_per_sec": 0 00:41:27.954 }, 00:41:27.954 "claimed": true, 00:41:27.954 "claim_type": "exclusive_write", 00:41:27.954 "zoned": false, 00:41:27.954 "supported_io_types": { 00:41:27.954 "read": true, 00:41:27.954 "write": true, 00:41:27.954 "unmap": true, 00:41:27.954 "flush": true, 00:41:27.954 "reset": true, 00:41:27.954 "nvme_admin": false, 00:41:27.954 "nvme_io": false, 00:41:27.955 "nvme_io_md": false, 00:41:27.955 "write_zeroes": true, 00:41:27.955 "zcopy": true, 00:41:27.955 "get_zone_info": false, 00:41:27.955 "zone_management": false, 00:41:27.955 "zone_append": false, 00:41:27.955 "compare": false, 00:41:27.955 "compare_and_write": false, 00:41:27.955 "abort": true, 00:41:27.955 "seek_hole": false, 00:41:27.955 "seek_data": false, 00:41:27.955 "copy": true, 00:41:27.955 "nvme_iov_md": false 00:41:27.955 }, 00:41:27.955 "memory_domains": [ 00:41:27.955 { 00:41:27.955 "dma_device_id": "system", 00:41:27.955 "dma_device_type": 1 00:41:27.955 }, 00:41:27.955 { 00:41:27.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:27.955 "dma_device_type": 2 00:41:27.955 } 00:41:27.955 ], 00:41:27.955 "driver_specific": { 00:41:27.955 "passthru": { 00:41:27.955 "name": "pt1", 00:41:27.955 "base_bdev_name": "malloc1" 00:41:27.955 } 00:41:27.955 } 00:41:27.955 }' 00:41:27.955 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:27.955 09:08:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:27.955 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:27.955 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:27.955 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:27.955 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:27.955 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:41:28.212 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:41:28.776 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:41:28.776 "name": "pt2", 00:41:28.776 "aliases": [ 00:41:28.776 "00000000-0000-0000-0000-000000000002" 00:41:28.776 ], 00:41:28.776 "product_name": "passthru", 00:41:28.776 "block_size": 4128, 00:41:28.776 "num_blocks": 8192, 00:41:28.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:28.776 "md_size": 32, 00:41:28.776 "md_interleave": true, 00:41:28.776 "dif_type": 0, 00:41:28.776 "assigned_rate_limits": { 00:41:28.776 "rw_ios_per_sec": 0, 00:41:28.776 "rw_mbytes_per_sec": 0, 00:41:28.776 "r_mbytes_per_sec": 0, 00:41:28.776 "w_mbytes_per_sec": 0 00:41:28.776 }, 00:41:28.776 "claimed": true, 00:41:28.776 "claim_type": "exclusive_write", 00:41:28.776 "zoned": false, 00:41:28.776 "supported_io_types": { 00:41:28.776 "read": true, 00:41:28.776 "write": true, 00:41:28.776 "unmap": true, 00:41:28.776 "flush": true, 00:41:28.776 "reset": true, 00:41:28.776 "nvme_admin": false, 00:41:28.776 "nvme_io": false, 00:41:28.776 "nvme_io_md": false, 00:41:28.776 "write_zeroes": true, 00:41:28.776 "zcopy": true, 00:41:28.776 "get_zone_info": false, 00:41:28.776 "zone_management": false, 00:41:28.776 "zone_append": false, 00:41:28.776 "compare": false, 00:41:28.776 "compare_and_write": false, 00:41:28.776 "abort": true, 00:41:28.776 "seek_hole": false, 00:41:28.776 "seek_data": false, 00:41:28.776 "copy": true, 00:41:28.776 "nvme_iov_md": false 00:41:28.776 }, 00:41:28.776 "memory_domains": [ 00:41:28.776 { 00:41:28.776 "dma_device_id": "system", 00:41:28.776 "dma_device_type": 1 00:41:28.776 }, 00:41:28.776 { 00:41:28.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:28.776 "dma_device_type": 2 00:41:28.776 } 00:41:28.776 ], 00:41:28.776 "driver_specific": { 00:41:28.776 "passthru": { 00:41:28.776 "name": "pt2", 00:41:28.776 "base_bdev_name": "malloc2" 00:41:28.776 } 00:41:28.776 } 00:41:28.776 }' 00:41:28.776 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:28.776 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:41:28.776 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:41:28.776 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:28.777 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:41:28.777 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:41:28.777 09:08:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:29.035 09:08:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:29.035 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:41:29.294 [2024-07-12 09:08:04.424404] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:29.294 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' ba0482ce-3541-4c65-b148-525a9e1e818e '!=' ba0482ce-3541-4c65-b148-525a9e1e818e ']' 00:41:29.294 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:41:29.295 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:41:29.295 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:41:29.295 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:41:29.552 [2024-07-12 09:08:04.716194] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:41:29.552 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:29.552 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:29.553 09:08:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:30.143 09:08:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:30.143 "name": "raid_bdev1", 00:41:30.143 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:30.143 "strip_size_kb": 0, 00:41:30.143 "state": "online", 00:41:30.143 "raid_level": "raid1", 00:41:30.143 "superblock": true, 00:41:30.143 "num_base_bdevs": 2, 00:41:30.143 "num_base_bdevs_discovered": 1, 00:41:30.143 "num_base_bdevs_operational": 1, 00:41:30.143 "base_bdevs_list": [ 00:41:30.143 { 00:41:30.143 "name": null, 00:41:30.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.143 "is_configured": false, 00:41:30.143 "data_offset": 256, 00:41:30.143 "data_size": 7936 00:41:30.143 }, 00:41:30.143 { 00:41:30.143 "name": "pt2", 00:41:30.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:30.143 "is_configured": true, 00:41:30.143 "data_offset": 256, 00:41:30.143 "data_size": 7936 00:41:30.143 } 00:41:30.143 ] 00:41:30.143 }' 00:41:30.143 09:08:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:30.143 09:08:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:30.709 09:08:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:30.967 [2024-07-12 09:08:05.968441] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:30.967 [2024-07-12 09:08:05.968720] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:30.967 [2024-07-12 09:08:05.968920] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:30.967 [2024-07-12 09:08:05.969113] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:30.967 [2024-07-12 09:08:05.969234] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:41:30.967 09:08:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:30.967 09:08:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:41:31.226 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:41:31.226 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:41:31.226 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:41:31.226 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:41:31.226 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:41:31.485 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:31.743 [2024-07-12 09:08:06.744564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:31.743 [2024-07-12 09:08:06.744884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:31.743 [2024-07-12 09:08:06.745056] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:41:31.743 [2024-07-12 09:08:06.745198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:31.743 [2024-07-12 09:08:06.747480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:31.743 [2024-07-12 09:08:06.747671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:31.743 [2024-07-12 09:08:06.747872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:31.743 [2024-07-12 09:08:06.748047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:31.743 [2024-07-12 09:08:06.748253] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:41:31.743 [2024-07-12 09:08:06.748405] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:31.743 [2024-07-12 09:08:06.748529] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:41:31.743 [2024-07-12 09:08:06.748750] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:41:31.743 [2024-07-12 09:08:06.748867] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:41:31.743 [2024-07-12 09:08:06.749083] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:31.743 pt2 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:31.743 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:31.744 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:31.744 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:31.744 09:08:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:32.001 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:32.001 "name": "raid_bdev1", 00:41:32.001 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:32.001 "strip_size_kb": 0, 00:41:32.001 "state": "online", 00:41:32.001 "raid_level": "raid1", 00:41:32.001 "superblock": true, 00:41:32.001 "num_base_bdevs": 2, 00:41:32.001 "num_base_bdevs_discovered": 1, 00:41:32.001 "num_base_bdevs_operational": 1, 00:41:32.001 "base_bdevs_list": [ 00:41:32.001 { 00:41:32.001 "name": null, 00:41:32.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.001 "is_configured": false, 00:41:32.001 "data_offset": 256, 00:41:32.001 "data_size": 7936 00:41:32.001 }, 00:41:32.001 { 00:41:32.001 "name": "pt2", 00:41:32.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:32.001 "is_configured": true, 00:41:32.001 "data_offset": 256, 00:41:32.001 "data_size": 7936 00:41:32.001 } 00:41:32.001 ] 00:41:32.001 }' 00:41:32.001 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:32.001 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:32.567 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:32.824 [2024-07-12 09:08:07.961391] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:32.824 [2024-07-12 09:08:07.961633] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:32.824 [2024-07-12 09:08:07.961833] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:32.824 [2024-07-12 09:08:07.962022] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:32.824 [2024-07-12 09:08:07.962140] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:41:32.824 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:32.824 09:08:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:41:33.083 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:41:33.083 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:41:33.083 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:41:33.083 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:33.341 [2024-07-12 09:08:08.449510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:33.341 [2024-07-12 09:08:08.449825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:33.341 [2024-07-12 09:08:08.449992] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:41:33.341 [2024-07-12 09:08:08.450118] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:33.341 
[2024-07-12 09:08:08.452425] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:33.341 [2024-07-12 09:08:08.452622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:33.341 [2024-07-12 09:08:08.452831] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:33.341 [2024-07-12 09:08:08.453008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:33.341 [2024-07-12 09:08:08.453270] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:41:33.341 [2024-07-12 09:08:08.453392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:33.341 [2024-07-12 09:08:08.453447] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:41:33.341 [2024-07-12 09:08:08.453702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:33.341 [2024-07-12 09:08:08.453910] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:41:33.341 [2024-07-12 09:08:08.454033] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:33.341 [2024-07-12 09:08:08.454149] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:41:33.341 [2024-07-12 09:08:08.454329] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:41:33.341 [2024-07-12 09:08:08.454437] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:41:33.341 [2024-07-12 09:08:08.454678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:33.341 pt1 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:33.341 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:33.600 
09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:33.600 "name": "raid_bdev1", 00:41:33.600 "uuid": "ba0482ce-3541-4c65-b148-525a9e1e818e", 00:41:33.600 "strip_size_kb": 0, 00:41:33.600 "state": "online", 00:41:33.600 "raid_level": "raid1", 00:41:33.600 "superblock": true, 00:41:33.600 "num_base_bdevs": 2, 00:41:33.600 "num_base_bdevs_discovered": 1, 00:41:33.600 "num_base_bdevs_operational": 1, 00:41:33.600 "base_bdevs_list": [ 00:41:33.600 { 00:41:33.600 "name": null, 00:41:33.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:33.600 "is_configured": false, 00:41:33.600 "data_offset": 256, 00:41:33.600 "data_size": 7936 00:41:33.600 }, 00:41:33.600 { 00:41:33.600 "name": "pt2", 00:41:33.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:33.600 "is_configured": true, 00:41:33.600 "data_offset": 256, 00:41:33.600 "data_size": 7936 00:41:33.600 } 00:41:33.600 ] 00:41:33.600 }' 00:41:33.600 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:33.600 09:08:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:34.532 09:08:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:41:34.532 09:08:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:34.532 09:08:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:41:34.532 09:08:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:41:34.532 09:08:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:35.097 [2024-07-12 09:08:09.994690] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' ba0482ce-3541-4c65-b148-525a9e1e818e '!=' ba0482ce-3541-4c65-b148-525a9e1e818e ']' 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 166190 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 166190 ']' 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 166190 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166190 00:41:35.097 killing process with pid 166190 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166190' 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@967 -- # kill 166190 00:41:35.097 09:08:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 166190 00:41:35.097 [2024-07-12 09:08:10.035253] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:35.097 [2024-07-12 09:08:10.035358] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:35.097 [2024-07-12 09:08:10.035419] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:35.097 [2024-07-12 09:08:10.035431] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:41:35.097 [2024-07-12 09:08:10.203578] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:36.504 ************************************ 00:41:36.504 END TEST raid_superblock_test_md_interleaved 00:41:36.504 ************************************ 00:41:36.504 09:08:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:41:36.504 00:41:36.504 real 0m19.159s 00:41:36.504 user 0m35.338s 00:41:36.504 sys 0m2.236s 00:41:36.504 09:08:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:36.504 09:08:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:36.504 09:08:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:41:36.504 09:08:11 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:41:36.504 09:08:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:41:36.504 09:08:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:36.504 09:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:36.504 ************************************ 00:41:36.504 START TEST raid_rebuild_test_sb_md_interleaved 00:41:36.504 ************************************ 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:36.504 
09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=166767 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 166767 /var/tmp/spdk-raid.sock 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 166767 ']' 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:36.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:36.504 09:08:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:36.504 [2024-07-12 09:08:11.467580] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:41:36.504 [2024-07-12 09:08:11.468083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166767 ] 00:41:36.504 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:36.504 Zero copy mechanism will not be used. 
00:41:36.504 [2024-07-12 09:08:11.647532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:36.761 [2024-07-12 09:08:11.882392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.018 [2024-07-12 09:08:12.081575] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:37.583 09:08:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:37.583 09:08:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:41:37.583 09:08:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:41:37.583 09:08:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:41:37.583 BaseBdev1_malloc 00:41:37.841 09:08:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:37.841 [2024-07-12 09:08:13.029798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:37.841 [2024-07-12 09:08:13.030137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:37.841 [2024-07-12 09:08:13.030220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:41:37.841 [2024-07-12 09:08:13.030461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:37.841 [2024-07-12 09:08:13.032742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:37.841 [2024-07-12 09:08:13.032903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:37.841 BaseBdev1 00:41:38.098 09:08:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:41:38.098 09:08:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:41:38.355 BaseBdev2_malloc 00:41:38.355 09:08:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:38.613 [2024-07-12 09:08:13.572746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:38.613 [2024-07-12 09:08:13.573132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:38.613 [2024-07-12 09:08:13.573295] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:41:38.613 [2024-07-12 09:08:13.573428] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:38.613 [2024-07-12 09:08:13.575641] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:38.613 [2024-07-12 09:08:13.575797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:38.613 BaseBdev2 00:41:38.613 09:08:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:41:38.870 spare_malloc 
00:41:38.870 09:08:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:39.127 spare_delay 00:41:39.127 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:39.445 [2024-07-12 09:08:14.359632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:39.445 [2024-07-12 09:08:14.359963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:39.445 [2024-07-12 09:08:14.360042] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:41:39.445 [2024-07-12 09:08:14.360309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:39.445 [2024-07-12 09:08:14.362711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:39.445 [2024-07-12 09:08:14.362874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:39.445 spare 00:41:39.445 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:41:39.445 [2024-07-12 09:08:14.599799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:39.445 [2024-07-12 09:08:14.602266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:39.445 [2024-07-12 09:08:14.602656] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:41:39.445 [2024-07-12 09:08:14.602791] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:39.445 [2024-07-12 09:08:14.602960] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:41:39.445 [2024-07-12 09:08:14.603156] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:41:39.445 [2024-07-12 09:08:14.603259] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:41:39.445 [2024-07-12 09:08:14.603415] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:39.445 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:39.445 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:39.446 09:08:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:39.446 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:39.720 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:39.720 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:39.720 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:39.720 "name": "raid_bdev1", 00:41:39.720 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:39.720 "strip_size_kb": 0, 00:41:39.720 "state": "online", 00:41:39.720 "raid_level": "raid1", 00:41:39.720 "superblock": true, 00:41:39.720 "num_base_bdevs": 2, 00:41:39.720 "num_base_bdevs_discovered": 2, 00:41:39.720 "num_base_bdevs_operational": 2, 00:41:39.720 "base_bdevs_list": [ 00:41:39.720 { 00:41:39.720 "name": "BaseBdev1", 00:41:39.720 "uuid": "10bc12dd-3e71-56ed-9b06-fd986272fcbf", 00:41:39.720 "is_configured": true, 00:41:39.720 "data_offset": 256, 00:41:39.720 "data_size": 7936 00:41:39.720 }, 00:41:39.720 { 00:41:39.720 "name": "BaseBdev2", 00:41:39.720 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:39.720 "is_configured": true, 00:41:39.720 "data_offset": 256, 00:41:39.720 "data_size": 7936 00:41:39.720 } 00:41:39.720 ] 00:41:39.720 }' 00:41:39.720 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:39.720 09:08:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:40.654 09:08:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:40.654 09:08:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:41:40.654 [2024-07-12 09:08:15.804366] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:40.654 09:08:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:41:40.654 09:08:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:40.654 09:08:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:41.221 [2024-07-12 09:08:16.336116] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:41.221 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:41.479 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:41.479 "name": "raid_bdev1", 00:41:41.479 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:41.479 "strip_size_kb": 0, 00:41:41.479 "state": "online", 00:41:41.479 "raid_level": "raid1", 00:41:41.479 "superblock": true, 00:41:41.479 "num_base_bdevs": 2, 00:41:41.479 "num_base_bdevs_discovered": 1, 00:41:41.479 "num_base_bdevs_operational": 1, 00:41:41.479 "base_bdevs_list": [ 00:41:41.479 { 00:41:41.479 "name": null, 00:41:41.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:41.479 "is_configured": false, 00:41:41.479 "data_offset": 256, 00:41:41.479 "data_size": 7936 00:41:41.479 }, 00:41:41.479 { 00:41:41.479 "name": "BaseBdev2", 00:41:41.479 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:41.479 "is_configured": true, 00:41:41.479 "data_offset": 256, 00:41:41.479 "data_size": 7936 00:41:41.479 } 00:41:41.479 ] 00:41:41.479 }' 00:41:41.479 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:41.479 09:08:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:42.413 09:08:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:42.413 [2024-07-12 09:08:17.576444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:42.413 [2024-07-12 09:08:17.592269] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:41:42.413 [2024-07-12 09:08:17.594708] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:42.671 09:08:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
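The verify_raid_bdev_state helper whose @116-@128 lines recur throughout this trace is easier to follow as a script. The sketch below is reconstructed only from those xtrace lines; the assertions at the end stand in for the parts of the helper the trace does not show and are a guess, not the actual bdev_raid.sh source:

  # Fetch the named raid bdev and compare its reported state with the expected values
  verify_raid_bdev_state() {
      local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
      local num_base_bdevs_operational=$5 raid_bdev_info
      raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
      # Hypothetical checks, filling in for the helper's omitted assertions
      [ "$(jq -r .state <<< "$raid_bdev_info")" = "$expected_state" ]
      [ "$(jq -r .raid_level <<< "$raid_bdev_info")" = "$raid_level" ]
      [ "$(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info")" = "$num_base_bdevs_operational" ]
  }

Invoked as in the trace, e.g. verify_raid_bdev_state raid_bdev1 online raid1 0 1 after BaseBdev1 has been removed.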
00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:43.608 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:43.984 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:43.984 "name": "raid_bdev1", 00:41:43.984 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:43.984 "strip_size_kb": 0, 00:41:43.984 "state": "online", 00:41:43.984 "raid_level": "raid1", 00:41:43.984 "superblock": true, 00:41:43.984 "num_base_bdevs": 2, 00:41:43.984 "num_base_bdevs_discovered": 2, 00:41:43.984 "num_base_bdevs_operational": 2, 00:41:43.984 "process": { 00:41:43.984 "type": "rebuild", 00:41:43.984 "target": "spare", 00:41:43.984 "progress": { 00:41:43.984 "blocks": 3328, 00:41:43.984 "percent": 41 00:41:43.984 } 00:41:43.984 }, 00:41:43.984 "base_bdevs_list": [ 00:41:43.984 { 00:41:43.984 "name": "spare", 00:41:43.984 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:43.984 "is_configured": true, 00:41:43.984 "data_offset": 256, 00:41:43.984 "data_size": 7936 00:41:43.984 }, 00:41:43.984 { 00:41:43.984 "name": "BaseBdev2", 00:41:43.984 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:43.984 "is_configured": true, 00:41:43.984 "data_offset": 256, 00:41:43.984 "data_size": 7936 00:41:43.984 } 00:41:43.984 ] 00:41:43.984 }' 00:41:43.984 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:43.984 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:43.984 09:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:43.984 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:43.984 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:44.242 [2024-07-12 09:08:19.316785] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:44.242 [2024-07-12 09:08:19.408140] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:44.242 [2024-07-12 09:08:19.408522] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:44.242 [2024-07-12 09:08:19.408670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:44.242 [2024-07-12 09:08:19.408730] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:44.501 09:08:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:44.501 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:44.759 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:44.759 "name": "raid_bdev1", 00:41:44.759 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:44.759 "strip_size_kb": 0, 00:41:44.759 "state": "online", 00:41:44.759 "raid_level": "raid1", 00:41:44.759 "superblock": true, 00:41:44.759 "num_base_bdevs": 2, 00:41:44.759 "num_base_bdevs_discovered": 1, 00:41:44.759 "num_base_bdevs_operational": 1, 00:41:44.760 "base_bdevs_list": [ 00:41:44.760 { 00:41:44.760 "name": null, 00:41:44.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:44.760 "is_configured": false, 00:41:44.760 "data_offset": 256, 00:41:44.760 "data_size": 7936 00:41:44.760 }, 00:41:44.760 { 00:41:44.760 "name": "BaseBdev2", 00:41:44.760 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:44.760 "is_configured": true, 00:41:44.760 "data_offset": 256, 00:41:44.760 "data_size": 7936 00:41:44.760 } 00:41:44.760 ] 00:41:44.760 }' 00:41:44.760 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:44.760 09:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:45.327 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:45.585 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:41:45.585 "name": "raid_bdev1", 00:41:45.585 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:45.585 "strip_size_kb": 0, 00:41:45.585 "state": "online", 00:41:45.586 "raid_level": "raid1", 00:41:45.586 "superblock": true, 00:41:45.586 "num_base_bdevs": 2, 00:41:45.586 "num_base_bdevs_discovered": 1, 00:41:45.586 "num_base_bdevs_operational": 1, 00:41:45.586 "base_bdevs_list": [ 00:41:45.586 { 00:41:45.586 "name": null, 00:41:45.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:45.586 "is_configured": false, 00:41:45.586 "data_offset": 256, 00:41:45.586 "data_size": 7936 00:41:45.586 }, 00:41:45.586 { 00:41:45.586 "name": "BaseBdev2", 00:41:45.586 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:45.586 "is_configured": true, 00:41:45.586 "data_offset": 256, 00:41:45.586 "data_size": 7936 00:41:45.586 } 00:41:45.586 ] 00:41:45.586 }' 00:41:45.586 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:45.844 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:45.844 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:45.844 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:45.844 09:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:46.103 [2024-07-12 09:08:21.075758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:46.103 [2024-07-12 09:08:21.090185] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:41:46.103 [2024-07-12 09:08:21.092606] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:46.103 09:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:47.038 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.297 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:47.297 "name": "raid_bdev1", 00:41:47.297 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:47.297 "strip_size_kb": 0, 00:41:47.297 "state": "online", 00:41:47.297 "raid_level": "raid1", 00:41:47.297 "superblock": true, 00:41:47.297 "num_base_bdevs": 2, 00:41:47.297 "num_base_bdevs_discovered": 2, 00:41:47.297 "num_base_bdevs_operational": 2, 00:41:47.297 
"process": { 00:41:47.297 "type": "rebuild", 00:41:47.297 "target": "spare", 00:41:47.297 "progress": { 00:41:47.297 "blocks": 3072, 00:41:47.297 "percent": 38 00:41:47.297 } 00:41:47.297 }, 00:41:47.297 "base_bdevs_list": [ 00:41:47.297 { 00:41:47.297 "name": "spare", 00:41:47.297 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:47.297 "is_configured": true, 00:41:47.297 "data_offset": 256, 00:41:47.297 "data_size": 7936 00:41:47.297 }, 00:41:47.297 { 00:41:47.297 "name": "BaseBdev2", 00:41:47.297 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:47.297 "is_configured": true, 00:41:47.297 "data_offset": 256, 00:41:47.297 "data_size": 7936 00:41:47.297 } 00:41:47.297 ] 00:41:47.298 }' 00:41:47.298 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:47.298 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:47.298 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:41:47.556 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1601 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:47.556 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:47.815 "name": "raid_bdev1", 00:41:47.815 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:47.815 "strip_size_kb": 0, 00:41:47.815 "state": "online", 00:41:47.815 "raid_level": "raid1", 00:41:47.815 "superblock": true, 00:41:47.815 "num_base_bdevs": 2, 00:41:47.815 
"num_base_bdevs_discovered": 2, 00:41:47.815 "num_base_bdevs_operational": 2, 00:41:47.815 "process": { 00:41:47.815 "type": "rebuild", 00:41:47.815 "target": "spare", 00:41:47.815 "progress": { 00:41:47.815 "blocks": 4096, 00:41:47.815 "percent": 51 00:41:47.815 } 00:41:47.815 }, 00:41:47.815 "base_bdevs_list": [ 00:41:47.815 { 00:41:47.815 "name": "spare", 00:41:47.815 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:47.815 "is_configured": true, 00:41:47.815 "data_offset": 256, 00:41:47.815 "data_size": 7936 00:41:47.815 }, 00:41:47.815 { 00:41:47.815 "name": "BaseBdev2", 00:41:47.815 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:47.815 "is_configured": true, 00:41:47.815 "data_offset": 256, 00:41:47.815 "data_size": 7936 00:41:47.815 } 00:41:47.815 ] 00:41:47.815 }' 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:47.815 09:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:48.750 09:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.010 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:49.010 "name": "raid_bdev1", 00:41:49.010 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:49.010 "strip_size_kb": 0, 00:41:49.010 "state": "online", 00:41:49.010 "raid_level": "raid1", 00:41:49.010 "superblock": true, 00:41:49.010 "num_base_bdevs": 2, 00:41:49.010 "num_base_bdevs_discovered": 2, 00:41:49.010 "num_base_bdevs_operational": 2, 00:41:49.010 "process": { 00:41:49.010 "type": "rebuild", 00:41:49.010 "target": "spare", 00:41:49.010 "progress": { 00:41:49.010 "blocks": 7680, 00:41:49.010 "percent": 96 00:41:49.010 } 00:41:49.010 }, 00:41:49.010 "base_bdevs_list": [ 00:41:49.010 { 00:41:49.010 "name": "spare", 00:41:49.010 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:49.010 "is_configured": true, 00:41:49.010 "data_offset": 256, 00:41:49.010 "data_size": 7936 00:41:49.010 }, 00:41:49.010 { 00:41:49.010 "name": "BaseBdev2", 00:41:49.010 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 
00:41:49.010 "is_configured": true, 00:41:49.010 "data_offset": 256, 00:41:49.010 "data_size": 7936 00:41:49.010 } 00:41:49.010 ] 00:41:49.010 }' 00:41:49.010 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:49.281 [2024-07-12 09:08:24.215157] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:49.282 [2024-07-12 09:08:24.215420] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:49.282 [2024-07-12 09:08:24.215708] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:49.282 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:49.282 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:49.282 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:49.282 09:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:50.233 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:50.491 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:50.491 "name": "raid_bdev1", 00:41:50.491 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:50.491 "strip_size_kb": 0, 00:41:50.491 "state": "online", 00:41:50.491 "raid_level": "raid1", 00:41:50.491 "superblock": true, 00:41:50.491 "num_base_bdevs": 2, 00:41:50.491 "num_base_bdevs_discovered": 2, 00:41:50.491 "num_base_bdevs_operational": 2, 00:41:50.491 "base_bdevs_list": [ 00:41:50.491 { 00:41:50.491 "name": "spare", 00:41:50.491 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:50.491 "is_configured": true, 00:41:50.491 "data_offset": 256, 00:41:50.491 "data_size": 7936 00:41:50.491 }, 00:41:50.491 { 00:41:50.491 "name": "BaseBdev2", 00:41:50.491 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:50.491 "is_configured": true, 00:41:50.491 "data_offset": 256, 00:41:50.491 "data_size": 7936 00:41:50.491 } 00:41:50.491 ] 00:41:50.491 }' 00:41:50.491 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:50.491 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:50.491 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:50.750 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:51.010 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:51.010 "name": "raid_bdev1", 00:41:51.010 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:51.010 "strip_size_kb": 0, 00:41:51.010 "state": "online", 00:41:51.010 "raid_level": "raid1", 00:41:51.010 "superblock": true, 00:41:51.010 "num_base_bdevs": 2, 00:41:51.010 "num_base_bdevs_discovered": 2, 00:41:51.010 "num_base_bdevs_operational": 2, 00:41:51.010 "base_bdevs_list": [ 00:41:51.010 { 00:41:51.010 "name": "spare", 00:41:51.010 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:51.010 "is_configured": true, 00:41:51.010 "data_offset": 256, 00:41:51.010 "data_size": 7936 00:41:51.010 }, 00:41:51.010 { 00:41:51.010 "name": "BaseBdev2", 00:41:51.010 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:51.010 "is_configured": true, 00:41:51.010 "data_offset": 256, 00:41:51.010 "data_size": 7936 00:41:51.010 } 00:41:51.010 ] 00:41:51.010 }' 00:41:51.010 09:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:51.010 09:08:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:51.010 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:51.268 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:51.268 "name": "raid_bdev1", 00:41:51.268 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:51.268 "strip_size_kb": 0, 00:41:51.268 "state": "online", 00:41:51.268 "raid_level": "raid1", 00:41:51.268 "superblock": true, 00:41:51.268 "num_base_bdevs": 2, 00:41:51.268 "num_base_bdevs_discovered": 2, 00:41:51.268 "num_base_bdevs_operational": 2, 00:41:51.268 "base_bdevs_list": [ 00:41:51.268 { 00:41:51.268 "name": "spare", 00:41:51.268 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:51.268 "is_configured": true, 00:41:51.268 "data_offset": 256, 00:41:51.268 "data_size": 7936 00:41:51.268 }, 00:41:51.268 { 00:41:51.268 "name": "BaseBdev2", 00:41:51.268 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:51.268 "is_configured": true, 00:41:51.268 "data_offset": 256, 00:41:51.268 "data_size": 7936 00:41:51.268 } 00:41:51.268 ] 00:41:51.268 }' 00:41:51.268 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:51.268 09:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:52.206 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:52.464 [2024-07-12 09:08:27.419220] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:52.464 [2024-07-12 09:08:27.419444] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:52.464 [2024-07-12 09:08:27.419656] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:52.464 [2024-07-12 09:08:27.419852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:52.464 [2024-07-12 09:08:27.419998] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:41:52.464 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:52.464 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:41:52.723 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:41:52.723 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:41:52.723 09:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:41:52.723 09:08:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:52.981 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:53.238 [2024-07-12 09:08:28.319390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:53.238 [2024-07-12 09:08:28.319739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:53.239 [2024-07-12 09:08:28.319930] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:41:53.239 [2024-07-12 09:08:28.320058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:53.239 [2024-07-12 09:08:28.322586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:53.239 [2024-07-12 09:08:28.322756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:53.239 [2024-07-12 09:08:28.322948] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:53.239 [2024-07-12 09:08:28.323129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:53.239 [2024-07-12 09:08:28.323396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:53.239 spare 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:53.239 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:53.239 [2024-07-12 09:08:28.423667] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:41:53.239 [2024-07-12 09:08:28.423924] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:41:53.239 [2024-07-12 09:08:28.424173] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:41:53.239 [2024-07-12 09:08:28.424451] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:41:53.239 [2024-07-12 09:08:28.424565] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:41:53.239 [2024-07-12 09:08:28.424744] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:53.497 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:53.497 "name": "raid_bdev1", 00:41:53.497 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:53.497 "strip_size_kb": 0, 00:41:53.497 "state": "online", 00:41:53.497 "raid_level": "raid1", 00:41:53.497 "superblock": true, 00:41:53.497 "num_base_bdevs": 2, 00:41:53.497 "num_base_bdevs_discovered": 2, 00:41:53.497 "num_base_bdevs_operational": 2, 00:41:53.497 "base_bdevs_list": [ 00:41:53.497 { 00:41:53.497 "name": "spare", 00:41:53.497 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:53.497 "is_configured": true, 00:41:53.497 "data_offset": 256, 00:41:53.497 "data_size": 7936 00:41:53.497 }, 00:41:53.497 { 00:41:53.497 "name": "BaseBdev2", 00:41:53.497 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:53.497 "is_configured": true, 00:41:53.497 "data_offset": 256, 00:41:53.497 "data_size": 7936 00:41:53.497 } 00:41:53.497 ] 00:41:53.497 }' 00:41:53.497 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:53.497 09:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:54.436 "name": "raid_bdev1", 00:41:54.436 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:54.436 "strip_size_kb": 0, 00:41:54.436 "state": "online", 00:41:54.436 "raid_level": "raid1", 00:41:54.436 "superblock": true, 00:41:54.436 "num_base_bdevs": 2, 00:41:54.436 "num_base_bdevs_discovered": 2, 00:41:54.436 "num_base_bdevs_operational": 2, 00:41:54.436 "base_bdevs_list": [ 00:41:54.436 { 00:41:54.436 "name": "spare", 00:41:54.436 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:54.436 "is_configured": true, 00:41:54.436 "data_offset": 256, 00:41:54.436 "data_size": 7936 00:41:54.436 }, 00:41:54.436 { 00:41:54.436 "name": "BaseBdev2", 00:41:54.436 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:54.436 "is_configured": true, 00:41:54.436 "data_offset": 256, 00:41:54.436 "data_size": 7936 00:41:54.436 } 00:41:54.436 ] 00:41:54.436 }' 00:41:54.436 09:08:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:41:54.436 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:54.694 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:54.694 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:54.694 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:54.951 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:41:54.951 09:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:55.209 [2024-07-12 09:08:30.176178] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:55.209 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:55.468 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:55.468 "name": "raid_bdev1", 00:41:55.468 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:55.468 "strip_size_kb": 0, 00:41:55.468 "state": "online", 00:41:55.468 "raid_level": "raid1", 00:41:55.468 "superblock": true, 00:41:55.468 "num_base_bdevs": 2, 00:41:55.468 "num_base_bdevs_discovered": 1, 00:41:55.468 "num_base_bdevs_operational": 1, 00:41:55.468 "base_bdevs_list": [ 00:41:55.468 { 00:41:55.468 "name": null, 00:41:55.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:55.468 "is_configured": false, 00:41:55.468 "data_offset": 256, 00:41:55.468 "data_size": 7936 00:41:55.468 }, 
00:41:55.468 { 00:41:55.468 "name": "BaseBdev2", 00:41:55.468 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:55.468 "is_configured": true, 00:41:55.468 "data_offset": 256, 00:41:55.468 "data_size": 7936 00:41:55.468 } 00:41:55.468 ] 00:41:55.468 }' 00:41:55.468 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:55.468 09:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:56.035 09:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:56.294 [2024-07-12 09:08:31.479077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:56.294 [2024-07-12 09:08:31.479579] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:56.294 [2024-07-12 09:08:31.479708] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:41:56.294 [2024-07-12 09:08:31.479838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:56.553 [2024-07-12 09:08:31.494648] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:41:56.553 [2024-07-12 09:08:31.497024] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:56.553 09:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:41:57.487 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:57.488 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:57.746 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:57.746 "name": "raid_bdev1", 00:41:57.746 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:57.746 "strip_size_kb": 0, 00:41:57.746 "state": "online", 00:41:57.746 "raid_level": "raid1", 00:41:57.746 "superblock": true, 00:41:57.746 "num_base_bdevs": 2, 00:41:57.746 "num_base_bdevs_discovered": 2, 00:41:57.746 "num_base_bdevs_operational": 2, 00:41:57.746 "process": { 00:41:57.746 "type": "rebuild", 00:41:57.746 "target": "spare", 00:41:57.746 "progress": { 00:41:57.746 "blocks": 3072, 00:41:57.746 "percent": 38 00:41:57.746 } 00:41:57.746 }, 00:41:57.746 "base_bdevs_list": [ 00:41:57.746 { 00:41:57.747 "name": "spare", 00:41:57.747 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:41:57.747 "is_configured": true, 00:41:57.747 "data_offset": 256, 00:41:57.747 "data_size": 7936 00:41:57.747 }, 00:41:57.747 { 
00:41:57.747 "name": "BaseBdev2", 00:41:57.747 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:57.747 "is_configured": true, 00:41:57.747 "data_offset": 256, 00:41:57.747 "data_size": 7936 00:41:57.747 } 00:41:57.747 ] 00:41:57.747 }' 00:41:57.747 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:41:57.747 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:57.747 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:41:57.747 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:41:57.747 09:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:58.006 [2024-07-12 09:08:33.151027] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:58.266 [2024-07-12 09:08:33.209488] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:58.266 [2024-07-12 09:08:33.209825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:58.266 [2024-07-12 09:08:33.209978] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:58.266 [2024-07-12 09:08:33.210023] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:58.266 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:58.524 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:41:58.524 "name": "raid_bdev1", 00:41:58.524 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:41:58.524 "strip_size_kb": 0, 00:41:58.524 "state": "online", 00:41:58.524 "raid_level": "raid1", 00:41:58.524 "superblock": true, 00:41:58.524 "num_base_bdevs": 2, 
00:41:58.524 "num_base_bdevs_discovered": 1, 00:41:58.524 "num_base_bdevs_operational": 1, 00:41:58.524 "base_bdevs_list": [ 00:41:58.524 { 00:41:58.524 "name": null, 00:41:58.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:58.524 "is_configured": false, 00:41:58.524 "data_offset": 256, 00:41:58.524 "data_size": 7936 00:41:58.524 }, 00:41:58.524 { 00:41:58.524 "name": "BaseBdev2", 00:41:58.524 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:41:58.524 "is_configured": true, 00:41:58.524 "data_offset": 256, 00:41:58.524 "data_size": 7936 00:41:58.524 } 00:41:58.524 ] 00:41:58.524 }' 00:41:58.524 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:41:58.524 09:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:41:59.091 09:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:59.350 [2024-07-12 09:08:34.455342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:59.350 [2024-07-12 09:08:34.455588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:59.350 [2024-07-12 09:08:34.455765] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:41:59.350 [2024-07-12 09:08:34.455906] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:59.350 [2024-07-12 09:08:34.456325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:59.350 [2024-07-12 09:08:34.456477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:59.350 [2024-07-12 09:08:34.456664] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:59.350 [2024-07-12 09:08:34.456780] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:59.350 [2024-07-12 09:08:34.456914] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:59.350 [2024-07-12 09:08:34.457002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:59.350 [2024-07-12 09:08:34.471677] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:41:59.350 spare 00:41:59.350 [2024-07-12 09:08:34.474177] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:59.350 09:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:00.726 "name": "raid_bdev1", 00:42:00.726 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:00.726 "strip_size_kb": 0, 00:42:00.726 "state": "online", 00:42:00.726 "raid_level": "raid1", 00:42:00.726 "superblock": true, 00:42:00.726 "num_base_bdevs": 2, 00:42:00.726 "num_base_bdevs_discovered": 2, 00:42:00.726 "num_base_bdevs_operational": 2, 00:42:00.726 "process": { 00:42:00.726 "type": "rebuild", 00:42:00.726 "target": "spare", 00:42:00.726 "progress": { 00:42:00.726 "blocks": 3072, 00:42:00.726 "percent": 38 00:42:00.726 } 00:42:00.726 }, 00:42:00.726 "base_bdevs_list": [ 00:42:00.726 { 00:42:00.726 "name": "spare", 00:42:00.726 "uuid": "1081b79b-9088-5e06-8fb0-138085dd1646", 00:42:00.726 "is_configured": true, 00:42:00.726 "data_offset": 256, 00:42:00.726 "data_size": 7936 00:42:00.726 }, 00:42:00.726 { 00:42:00.726 "name": "BaseBdev2", 00:42:00.726 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:00.726 "is_configured": true, 00:42:00.726 "data_offset": 256, 00:42:00.726 "data_size": 7936 00:42:00.726 } 00:42:00.726 ] 00:42:00.726 }' 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:42:00.726 09:08:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:00.985 [2024-07-12 09:08:36.108525] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:01.243 [2024-07-12 09:08:36.186402] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:01.243 [2024-07-12 09:08:36.186760] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:01.243 [2024-07-12 09:08:36.186907] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:01.243 [2024-07-12 09:08:36.187045] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:01.243 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.501 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:01.501 "name": "raid_bdev1", 00:42:01.501 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:01.501 "strip_size_kb": 0, 00:42:01.501 "state": "online", 00:42:01.501 "raid_level": "raid1", 00:42:01.501 "superblock": true, 00:42:01.501 "num_base_bdevs": 2, 00:42:01.501 "num_base_bdevs_discovered": 1, 00:42:01.501 "num_base_bdevs_operational": 1, 00:42:01.501 "base_bdevs_list": [ 00:42:01.501 { 00:42:01.501 "name": null, 00:42:01.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:01.501 "is_configured": false, 00:42:01.501 "data_offset": 256, 00:42:01.501 "data_size": 7936 00:42:01.501 }, 00:42:01.501 { 00:42:01.501 "name": "BaseBdev2", 00:42:01.501 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:01.501 "is_configured": true, 00:42:01.501 "data_offset": 256, 00:42:01.501 "data_size": 7936 00:42:01.501 } 00:42:01.501 ] 00:42:01.501 }' 00:42:01.501 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:01.501 09:08:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:02.068 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:02.636 "name": "raid_bdev1", 00:42:02.636 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:02.636 "strip_size_kb": 0, 00:42:02.636 "state": "online", 00:42:02.636 "raid_level": "raid1", 00:42:02.636 "superblock": true, 00:42:02.636 "num_base_bdevs": 2, 00:42:02.636 "num_base_bdevs_discovered": 1, 00:42:02.636 "num_base_bdevs_operational": 1, 00:42:02.636 "base_bdevs_list": [ 00:42:02.636 { 00:42:02.636 "name": null, 00:42:02.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.636 "is_configured": false, 00:42:02.636 "data_offset": 256, 00:42:02.636 "data_size": 7936 00:42:02.636 }, 00:42:02.636 { 00:42:02.636 "name": "BaseBdev2", 00:42:02.636 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:02.636 "is_configured": true, 00:42:02.636 "data_offset": 256, 00:42:02.636 "data_size": 7936 00:42:02.636 } 00:42:02.636 ] 00:42:02.636 }' 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:02.636 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:42:02.895 09:08:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:03.153 [2024-07-12 09:08:38.208673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:03.153 [2024-07-12 09:08:38.208948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:03.153 [2024-07-12 09:08:38.209115] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:42:03.153 [2024-07-12 09:08:38.209232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:03.153 [2024-07-12 09:08:38.209539] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:03.153 [2024-07-12 09:08:38.209676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:03.153 [2024-07-12 09:08:38.209872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:03.153 [2024-07-12 09:08:38.209989] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:03.153 [2024-07-12 09:08:38.210079] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:03.153 BaseBdev1 00:42:03.153 09:08:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:04.089 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:04.347 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:04.347 "name": "raid_bdev1", 00:42:04.347 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:04.347 "strip_size_kb": 0, 00:42:04.347 "state": "online", 00:42:04.347 "raid_level": "raid1", 00:42:04.347 "superblock": true, 00:42:04.347 "num_base_bdevs": 2, 00:42:04.347 "num_base_bdevs_discovered": 1, 00:42:04.347 "num_base_bdevs_operational": 1, 00:42:04.347 "base_bdevs_list": [ 00:42:04.347 { 00:42:04.347 "name": null, 00:42:04.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:04.347 "is_configured": false, 00:42:04.347 "data_offset": 256, 00:42:04.347 "data_size": 7936 00:42:04.347 }, 00:42:04.347 { 00:42:04.347 "name": "BaseBdev2", 00:42:04.347 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:04.347 "is_configured": true, 00:42:04.347 "data_offset": 256, 00:42:04.347 "data_size": 7936 00:42:04.347 } 00:42:04.347 ] 00:42:04.347 }' 00:42:04.347 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:04.347 09:08:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:05.281 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:05.539 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:05.539 "name": "raid_bdev1", 00:42:05.539 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:05.539 "strip_size_kb": 0, 00:42:05.539 "state": "online", 00:42:05.539 "raid_level": "raid1", 00:42:05.539 "superblock": true, 00:42:05.539 "num_base_bdevs": 2, 00:42:05.539 "num_base_bdevs_discovered": 1, 00:42:05.539 "num_base_bdevs_operational": 1, 00:42:05.539 "base_bdevs_list": [ 00:42:05.539 { 00:42:05.539 "name": null, 00:42:05.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:05.539 "is_configured": false, 00:42:05.539 "data_offset": 256, 00:42:05.539 "data_size": 7936 00:42:05.539 }, 00:42:05.539 { 00:42:05.539 "name": "BaseBdev2", 00:42:05.539 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:05.539 "is_configured": true, 00:42:05.539 "data_offset": 256, 00:42:05.539 "data_size": 7936 00:42:05.539 } 00:42:05.539 ] 00:42:05.539 }' 00:42:05.539 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:05.539 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:05.539 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.797 09:08:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:42:05.797 09:08:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:06.055 [2024-07-12 09:08:41.133488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:06.055 [2024-07-12 09:08:41.133962] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:06.055 [2024-07-12 09:08:41.134092] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:06.055 request: 00:42:06.055 { 00:42:06.055 "base_bdev": "BaseBdev1", 00:42:06.055 "raid_bdev": "raid_bdev1", 00:42:06.055 "method": "bdev_raid_add_base_bdev", 00:42:06.055 "req_id": 1 00:42:06.055 } 00:42:06.055 Got JSON-RPC error response 00:42:06.055 response: 00:42:06.055 { 00:42:06.055 "code": -22, 00:42:06.055 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:06.055 } 00:42:06.055 09:08:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:42:06.055 09:08:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:06.055 09:08:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:06.055 09:08:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:06.055 09:08:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:07.035 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.291 
09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:42:07.291 "name": "raid_bdev1", 00:42:07.291 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:07.291 "strip_size_kb": 0, 00:42:07.291 "state": "online", 00:42:07.291 "raid_level": "raid1", 00:42:07.291 "superblock": true, 00:42:07.291 "num_base_bdevs": 2, 00:42:07.291 "num_base_bdevs_discovered": 1, 00:42:07.291 "num_base_bdevs_operational": 1, 00:42:07.291 "base_bdevs_list": [ 00:42:07.291 { 00:42:07.291 "name": null, 00:42:07.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:07.291 "is_configured": false, 00:42:07.291 "data_offset": 256, 00:42:07.292 "data_size": 7936 00:42:07.292 }, 00:42:07.292 { 00:42:07.292 "name": "BaseBdev2", 00:42:07.292 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:07.292 "is_configured": true, 00:42:07.292 "data_offset": 256, 00:42:07.292 "data_size": 7936 00:42:07.292 } 00:42:07.292 ] 00:42:07.292 }' 00:42:07.292 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:42:07.292 09:08:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:08.223 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:08.223 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:42:08.223 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:42:08.223 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:42:08.223 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:08.224 "name": "raid_bdev1", 00:42:08.224 "uuid": "c153255c-aedf-4429-8c63-5411f215df27", 00:42:08.224 "strip_size_kb": 0, 00:42:08.224 "state": "online", 00:42:08.224 "raid_level": "raid1", 00:42:08.224 "superblock": true, 00:42:08.224 "num_base_bdevs": 2, 00:42:08.224 "num_base_bdevs_discovered": 1, 00:42:08.224 "num_base_bdevs_operational": 1, 00:42:08.224 "base_bdevs_list": [ 00:42:08.224 { 00:42:08.224 "name": null, 00:42:08.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:08.224 "is_configured": false, 00:42:08.224 "data_offset": 256, 00:42:08.224 "data_size": 7936 00:42:08.224 }, 00:42:08.224 { 00:42:08.224 "name": "BaseBdev2", 00:42:08.224 "uuid": "9c0d5a0b-d5fa-5662-a8e2-b9298692dcda", 00:42:08.224 "is_configured": true, 00:42:08.224 "data_offset": 256, 00:42:08.224 "data_size": 7936 00:42:08.224 } 00:42:08.224 ] 00:42:08.224 }' 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:42:08.224 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:42:08.481 09:08:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 166767 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 166767 ']' 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 166767 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166767 00:42:08.481 killing process with pid 166767 00:42:08.481 Received shutdown signal, test time was about 60.000000 seconds 00:42:08.481 00:42:08.481 Latency(us) 00:42:08.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:08.481 =================================================================================================================== 00:42:08.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166767' 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 166767 00:42:08.481 09:08:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 166767 00:42:08.481 [2024-07-12 09:08:43.458741] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:08.481 [2024-07-12 09:08:43.458890] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:08.481 [2024-07-12 09:08:43.459116] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:08.481 [2024-07-12 09:08:43.459237] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:42:08.738 [2024-07-12 09:08:43.717199] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:10.111 ************************************ 00:42:10.111 END TEST raid_rebuild_test_sb_md_interleaved 00:42:10.111 ************************************ 00:42:10.111 09:08:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:42:10.111 00:42:10.111 real 0m33.480s 00:42:10.111 user 0m55.113s 00:42:10.111 sys 0m2.989s 00:42:10.111 09:08:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:10.111 09:08:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:42:10.111 09:08:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:42:10.111 09:08:44 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:42:10.111 09:08:44 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:42:10.111 09:08:44 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 166767 ']' 00:42:10.111 09:08:44 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 166767 00:42:10.111 09:08:44 
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:42:10.111 ************************************ 00:42:10.111 END TEST bdev_raid 00:42:10.111 ************************************ 00:42:10.111 00:42:10.111 real 26m33.658s 00:42:10.111 user 45m59.325s 00:42:10.111 sys 2m58.686s 00:42:10.111 09:08:44 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:10.111 09:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:10.111 09:08:44 -- common/autotest_common.sh@1142 -- # return 0 00:42:10.111 09:08:44 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:42:10.111 09:08:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:10.111 09:08:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:10.111 09:08:44 -- common/autotest_common.sh@10 -- # set +x 00:42:10.111 ************************************ 00:42:10.111 START TEST bdevperf_config 00:42:10.111 ************************************ 00:42:10.111 09:08:44 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:42:10.111 * Looking for test storage... 00:42:10.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:42:10.111 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:42:10.111 09:08:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:10.112 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:10.112 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:42:10.112 09:08:45 bdevperf_config -- 
bdevperf/common.sh@8 -- # local job_section=job1 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:10.112 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:10.112 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:10.112 09:08:45 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-12 09:08:45.130222] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:15.372 [2024-07-12 09:08:45.130428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167674 ] 00:42:15.372 Using job config with 4 jobs 00:42:15.372 [2024-07-12 09:08:45.288776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.372 [2024-07-12 09:08:45.574044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.372 cpumask for '\''job0'\'' is too big 00:42:15.372 cpumask for '\''job1'\'' is too big 00:42:15.372 cpumask for '\''job2'\'' is too big 00:42:15.372 cpumask for '\''job3'\'' is too big 00:42:15.372 Running I/O for 2 seconds... 
00:42:15.372 00:42:15.372 Latency(us) 00:42:15.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23227.69 22.68 0.00 0.00 11010.47 1980.97 16801.05 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23206.62 22.66 0.00 0.00 10994.72 1980.97 15371.17 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23186.40 22.64 0.00 0.00 10977.63 2040.55 15252.01 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.03 23261.07 22.72 0.00 0.00 10917.10 934.63 15132.86 00:42:15.372 =================================================================================================================== 00:42:15.372 Total : 92881.79 90.70 0.00 0.00 10974.90 934.63 16801.05' 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-12 09:08:45.130222] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:15.372 [2024-07-12 09:08:45.130428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167674 ] 00:42:15.372 Using job config with 4 jobs 00:42:15.372 [2024-07-12 09:08:45.288776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.372 [2024-07-12 09:08:45.574044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.372 cpumask for '\''job0'\'' is too big 00:42:15.372 cpumask for '\''job1'\'' is too big 00:42:15.372 cpumask for '\''job2'\'' is too big 00:42:15.372 cpumask for '\''job3'\'' is too big 00:42:15.372 Running I/O for 2 seconds... 00:42:15.372 00:42:15.372 Latency(us) 00:42:15.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23227.69 22.68 0.00 0.00 11010.47 1980.97 16801.05 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23206.62 22.66 0.00 0.00 10994.72 1980.97 15371.17 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23186.40 22.64 0.00 0.00 10977.63 2040.55 15252.01 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.03 23261.07 22.72 0.00 0.00 10917.10 934.63 15132.86 00:42:15.372 =================================================================================================================== 00:42:15.372 Total : 92881.79 90.70 0.00 0.00 10974.90 934.63 16801.05' 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 09:08:45.130222] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:42:15.372 [2024-07-12 09:08:45.130428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167674 ] 00:42:15.372 Using job config with 4 jobs 00:42:15.372 [2024-07-12 09:08:45.288776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.372 [2024-07-12 09:08:45.574044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.372 cpumask for '\''job0'\'' is too big 00:42:15.372 cpumask for '\''job1'\'' is too big 00:42:15.372 cpumask for '\''job2'\'' is too big 00:42:15.372 cpumask for '\''job3'\'' is too big 00:42:15.372 Running I/O for 2 seconds... 00:42:15.372 00:42:15.372 Latency(us) 00:42:15.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23227.69 22.68 0.00 0.00 11010.47 1980.97 16801.05 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23206.62 22.66 0.00 0.00 10994.72 1980.97 15371.17 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.02 23186.40 22.64 0.00 0.00 10977.63 2040.55 15252.01 00:42:15.372 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:15.372 Malloc0 : 2.03 23261.07 22.72 0.00 0.00 10917.10 934.63 15132.86 00:42:15.372 =================================================================================================================== 00:42:15.372 Total : 92881.79 90.70 0.00 0.00 10974.90 934.63 16801.05' 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:42:15.372 09:08:49 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:15.372 [2024-07-12 09:08:49.571096] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:15.372 [2024-07-12 09:08:49.571598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167755 ] 00:42:15.372 [2024-07-12 09:08:49.734844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.372 [2024-07-12 09:08:50.034224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.372 cpumask for 'job0' is too big 00:42:15.372 cpumask for 'job1' is too big 00:42:15.372 cpumask for 'job2' is too big 00:42:15.372 cpumask for 'job3' is too big 00:42:19.559 09:08:53 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:42:19.559 Running I/O for 2 seconds... 
00:42:19.559 00:42:19.559 Latency(us) 00:42:19.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:19.559 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:19.559 Malloc0 : 2.02 23466.12 22.92 0.00 0.00 10897.60 1906.50 16562.73 00:42:19.559 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:19.559 Malloc0 : 2.02 23445.07 22.90 0.00 0.00 10883.54 1906.50 14715.81 00:42:19.559 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:19.559 Malloc0 : 2.02 23424.24 22.88 0.00 0.00 10868.31 1951.19 13941.29 00:42:19.559 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:42:19.559 Malloc0 : 2.02 23403.56 22.86 0.00 0.00 10853.32 1951.19 13941.29 00:42:19.559 =================================================================================================================== 00:42:19.559 Total : 93738.99 91.54 0.00 0.00 10875.69 1906.50 16562.73' 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:19.560 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:19.560 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:19.560 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:19.560 09:08:53 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
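Each bdevperf_config case above repeats the same capture step: run the example bdevperf binary for two seconds against the generated conf.json/test.conf pair, keep the output, and pull the configured job count back out with grep (get_num_jobs in bdevperf/common.sh). A minimal sketch of that step is shown below; it uses only the binary path, flags, and grep expressions already visible in this log, and the expected count of 3 reflects the job0/job1/job2 write jobs created just above.

out=$(/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
      -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf)
# get_num_jobs: recover how many jobs bdevperf actually configured from its output.
num=$(echo "$out" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+')
[[ "$num" == 3 ]]   # this case built three write jobs, so three are expected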
00:42:23.772 09:08:58 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-12 09:08:54.045916] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:23.772 [2024-07-12 09:08:54.046212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167805 ] 00:42:23.772 Using job config with 3 jobs 00:42:23.772 [2024-07-12 09:08:54.217269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.772 [2024-07-12 09:08:54.492782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.772 cpumask for '\''job0'\'' is too big 00:42:23.772 cpumask for '\''job1'\'' is too big 00:42:23.772 cpumask for '\''job2'\'' is too big 00:42:23.772 Running I/O for 2 seconds... 00:42:23.772 00:42:23.772 Latency(us) 00:42:23.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.01 31200.12 30.47 0.00 0.00 8195.40 1884.16 11677.32 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31213.86 30.48 0.00 0.00 8172.68 1854.37 9889.98 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31186.83 30.46 0.00 0.00 8162.68 1839.48 8877.15 00:42:23.772 =================================================================================================================== 00:42:23.772 Total : 93600.81 91.41 0.00 0.00 8176.90 1839.48 11677.32' 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-12 09:08:54.045916] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:23.772 [2024-07-12 09:08:54.046212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167805 ] 00:42:23.772 Using job config with 3 jobs 00:42:23.772 [2024-07-12 09:08:54.217269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.772 [2024-07-12 09:08:54.492782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.772 cpumask for '\''job0'\'' is too big 00:42:23.772 cpumask for '\''job1'\'' is too big 00:42:23.772 cpumask for '\''job2'\'' is too big 00:42:23.772 Running I/O for 2 seconds... 
00:42:23.772 00:42:23.772 Latency(us) 00:42:23.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.01 31200.12 30.47 0.00 0.00 8195.40 1884.16 11677.32 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31213.86 30.48 0.00 0.00 8172.68 1854.37 9889.98 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31186.83 30.46 0.00 0.00 8162.68 1839.48 8877.15 00:42:23.772 =================================================================================================================== 00:42:23.772 Total : 93600.81 91.41 0.00 0.00 8176.90 1839.48 11677.32' 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 09:08:54.045916] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:23.772 [2024-07-12 09:08:54.046212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167805 ] 00:42:23.772 Using job config with 3 jobs 00:42:23.772 [2024-07-12 09:08:54.217269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.772 [2024-07-12 09:08:54.492782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.772 cpumask for '\''job0'\'' is too big 00:42:23.772 cpumask for '\''job1'\'' is too big 00:42:23.772 cpumask for '\''job2'\'' is too big 00:42:23.772 Running I/O for 2 seconds... 00:42:23.772 00:42:23.772 Latency(us) 00:42:23.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.01 31200.12 30.47 0.00 0.00 8195.40 1884.16 11677.32 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31213.86 30.48 0.00 0.00 8172.68 1854.37 9889.98 00:42:23.772 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:42:23.772 Malloc0 : 2.02 31186.83 30.46 0.00 0.00 8162.68 1839.48 8877.15 00:42:23.772 =================================================================================================================== 00:42:23.772 Total : 93600.81 91.41 0.00 0.00 8176.90 1839.48 11677.32' 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:42:23.772 09:08:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:42:23.773 
09:08:58 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:42:23.773 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:23.773 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:23.773 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:23.773 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:42:23.773 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:42:23.773 09:08:58 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-12 09:08:58.505515] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:42:28.014 [2024-07-12 09:08:58.505757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167882 ] 00:42:28.014 Using job config with 4 jobs 00:42:28.014 [2024-07-12 09:08:58.678447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.014 [2024-07-12 09:08:58.964981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.014 cpumask for '\''job0'\'' is too big 00:42:28.014 cpumask for '\''job1'\'' is too big 00:42:28.014 cpumask for '\''job2'\'' is too big 00:42:28.014 cpumask for '\''job3'\'' is too big 00:42:28.014 Running I/O for 2 seconds... 00:42:28.014 00:42:28.014 Latency(us) 00:42:28.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.03 10837.08 10.58 0.00 0.00 23601.40 4170.47 34317.03 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.03 10826.99 10.57 0.00 0.00 23597.54 4885.41 34555.35 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.04 10817.12 10.56 0.00 0.00 23532.65 4081.11 30742.34 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.04 10805.42 10.55 0.00 0.00 23532.73 4736.47 30980.65 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.05 10847.82 10.59 0.00 0.00 23356.13 4200.26 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10837.72 10.58 0.00 0.00 23353.22 4796.04 29669.93 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.06 10827.16 10.57 0.00 0.00 23289.31 4140.68 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10817.07 10.56 0.00 0.00 23292.42 4766.25 29908.25 00:42:28.014 =================================================================================================================== 00:42:28.014 Total : 86616.39 84.59 0.00 0.00 23443.72 4081.11 34555.35' 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-12 09:08:58.505515] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:28.014 [2024-07-12 09:08:58.505757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167882 ] 00:42:28.014 Using job config with 4 jobs 00:42:28.014 [2024-07-12 09:08:58.678447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.014 [2024-07-12 09:08:58.964981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.014 cpumask for '\''job0'\'' is too big 00:42:28.014 cpumask for '\''job1'\'' is too big 00:42:28.014 cpumask for '\''job2'\'' is too big 00:42:28.014 cpumask for '\''job3'\'' is too big 00:42:28.014 Running I/O for 2 seconds... 
00:42:28.014 00:42:28.014 Latency(us) 00:42:28.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.03 10837.08 10.58 0.00 0.00 23601.40 4170.47 34317.03 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.03 10826.99 10.57 0.00 0.00 23597.54 4885.41 34555.35 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.04 10817.12 10.56 0.00 0.00 23532.65 4081.11 30742.34 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.04 10805.42 10.55 0.00 0.00 23532.73 4736.47 30980.65 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.05 10847.82 10.59 0.00 0.00 23356.13 4200.26 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10837.72 10.58 0.00 0.00 23353.22 4796.04 29669.93 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.06 10827.16 10.57 0.00 0.00 23289.31 4140.68 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10817.07 10.56 0.00 0.00 23292.42 4766.25 29908.25 00:42:28.014 =================================================================================================================== 00:42:28.014 Total : 86616.39 84.59 0.00 0.00 23443.72 4081.11 34555.35' 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 09:08:58.505515] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:28.014 [2024-07-12 09:08:58.505757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167882 ] 00:42:28.014 Using job config with 4 jobs 00:42:28.014 [2024-07-12 09:08:58.678447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.014 [2024-07-12 09:08:58.964981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.014 cpumask for '\''job0'\'' is too big 00:42:28.014 cpumask for '\''job1'\'' is too big 00:42:28.014 cpumask for '\''job2'\'' is too big 00:42:28.014 cpumask for '\''job3'\'' is too big 00:42:28.014 Running I/O for 2 seconds... 
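The entries that follow trace the job-count check itself: the captured bdevperf output is echoed back, the "Using job config with N jobs" notice is isolated with grep, the number is extracted, and test_config.sh compares it against the expected 4. A condensed sketch of that pipeline (variable names here are illustrative, not the script's own):
out=$(cat bdevperf_output.txt)   # stands in for the captured bdevperf stdout shown in the trace
jobs=$(echo "$out" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+')
[[ "$jobs" == 4 ]] && echo "job count OK"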
00:42:28.014 00:42:28.014 Latency(us) 00:42:28.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.03 10837.08 10.58 0.00 0.00 23601.40 4170.47 34317.03 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.03 10826.99 10.57 0.00 0.00 23597.54 4885.41 34555.35 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.04 10817.12 10.56 0.00 0.00 23532.65 4081.11 30742.34 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.04 10805.42 10.55 0.00 0.00 23532.73 4736.47 30980.65 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.05 10847.82 10.59 0.00 0.00 23356.13 4200.26 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10837.72 10.58 0.00 0.00 23353.22 4796.04 29669.93 00:42:28.014 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc0 : 2.06 10827.16 10.57 0.00 0.00 23289.31 4140.68 29669.93 00:42:28.014 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:42:28.014 Malloc1 : 2.06 10817.07 10.56 0.00 0.00 23292.42 4766.25 29908.25 00:42:28.014 =================================================================================================================== 00:42:28.014 Total : 86616.39 84.59 0.00 0.00 23443.72 4081.11 34555.35' 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:42:28.014 09:09:03 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:42:28.014 ************************************ 00:42:28.014 END TEST bdevperf_config 00:42:28.014 ************************************ 00:42:28.014 00:42:28.014 real 0m18.091s 00:42:28.014 user 0m16.293s 00:42:28.014 sys 0m1.238s 00:42:28.014 09:09:03 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:28.014 09:09:03 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:42:28.014 09:09:03 -- common/autotest_common.sh@1142 -- # return 0 00:42:28.014 09:09:03 -- spdk/autotest.sh@192 -- # uname -s 00:42:28.015 09:09:03 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:42:28.015 09:09:03 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:42:28.015 09:09:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:28.015 09:09:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:28.015 09:09:03 -- common/autotest_common.sh@10 -- # set +x 00:42:28.015 ************************************ 00:42:28.015 START TEST reactor_set_interrupt 00:42:28.015 ************************************ 00:42:28.015 09:09:03 reactor_set_interrupt -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:42:28.015 * Looking for test storage... 00:42:28.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.015 09:09:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:42:28.015 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:42:28.273 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.273 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.273 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:42:28.273 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:28.273 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:42:28.273 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:42:28.273 09:09:03 reactor_set_interrupt -- 
common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:42:28.273 09:09:03 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_CET=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@51 -- # 
CONFIG_URING_ZNS=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_ARC4RANDOM=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_FC=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:42:28.274 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:42:28.274 #define SPDK_CONFIG_H 00:42:28.274 #define SPDK_CONFIG_APPS 1 00:42:28.274 #define SPDK_CONFIG_ARCH native 00:42:28.274 #define SPDK_CONFIG_ASAN 1 00:42:28.274 #undef SPDK_CONFIG_AVAHI 00:42:28.274 #undef SPDK_CONFIG_CET 00:42:28.274 #define SPDK_CONFIG_COVERAGE 1 00:42:28.274 #define SPDK_CONFIG_CROSS_PREFIX 00:42:28.274 #undef SPDK_CONFIG_CRYPTO 00:42:28.274 #undef SPDK_CONFIG_CRYPTO_MLX5 00:42:28.274 #undef SPDK_CONFIG_CUSTOMOCF 00:42:28.274 #undef SPDK_CONFIG_DAOS 00:42:28.274 #define SPDK_CONFIG_DAOS_DIR 00:42:28.274 #define SPDK_CONFIG_DEBUG 1 00:42:28.274 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:42:28.274 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:42:28.274 #define SPDK_CONFIG_DPDK_INC_DIR 00:42:28.274 #define SPDK_CONFIG_DPDK_LIB_DIR 00:42:28.274 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:42:28.274 #undef SPDK_CONFIG_DPDK_UADK 00:42:28.274 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:42:28.274 #define SPDK_CONFIG_EXAMPLES 1 00:42:28.274 #undef SPDK_CONFIG_FC 00:42:28.274 #define SPDK_CONFIG_FC_PATH 00:42:28.274 #define SPDK_CONFIG_FIO_PLUGIN 1 00:42:28.274 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:42:28.274 #undef SPDK_CONFIG_FUSE 00:42:28.274 #undef SPDK_CONFIG_FUZZER 00:42:28.274 #define SPDK_CONFIG_FUZZER_LIB 00:42:28.274 #undef SPDK_CONFIG_GOLANG 00:42:28.274 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:42:28.274 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:42:28.274 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:42:28.274 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:42:28.274 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:42:28.274 #undef SPDK_CONFIG_HAVE_LIBBSD 00:42:28.274 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:42:28.274 #define SPDK_CONFIG_IDXD 1 00:42:28.274 #undef SPDK_CONFIG_IDXD_KERNEL 00:42:28.274 #undef SPDK_CONFIG_IPSEC_MB 00:42:28.274 #define SPDK_CONFIG_IPSEC_MB_DIR 00:42:28.274 #define SPDK_CONFIG_ISAL 1 00:42:28.274 #define SPDK_CONFIG_ISAL_CRYPTO 1 
00:42:28.274 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:42:28.274 #define SPDK_CONFIG_LIBDIR 00:42:28.274 #undef SPDK_CONFIG_LTO 00:42:28.274 #define SPDK_CONFIG_MAX_LCORES 128 00:42:28.274 #define SPDK_CONFIG_NVME_CUSE 1 00:42:28.274 #undef SPDK_CONFIG_OCF 00:42:28.274 #define SPDK_CONFIG_OCF_PATH 00:42:28.274 #define SPDK_CONFIG_OPENSSL_PATH 00:42:28.274 #undef SPDK_CONFIG_PGO_CAPTURE 00:42:28.274 #define SPDK_CONFIG_PGO_DIR 00:42:28.274 #undef SPDK_CONFIG_PGO_USE 00:42:28.274 #define SPDK_CONFIG_PREFIX /usr/local 00:42:28.274 #define SPDK_CONFIG_RAID5F 1 00:42:28.274 #undef SPDK_CONFIG_RBD 00:42:28.274 #define SPDK_CONFIG_RDMA 1 00:42:28.274 #define SPDK_CONFIG_RDMA_PROV verbs 00:42:28.274 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:42:28.274 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:42:28.274 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:42:28.274 #undef SPDK_CONFIG_SHARED 00:42:28.274 #undef SPDK_CONFIG_SMA 00:42:28.274 #define SPDK_CONFIG_TESTS 1 00:42:28.274 #undef SPDK_CONFIG_TSAN 00:42:28.274 #undef SPDK_CONFIG_UBLK 00:42:28.274 #define SPDK_CONFIG_UBSAN 1 00:42:28.274 #define SPDK_CONFIG_UNIT_TESTS 1 00:42:28.274 #undef SPDK_CONFIG_URING 00:42:28.274 #define SPDK_CONFIG_URING_PATH 00:42:28.274 #undef SPDK_CONFIG_URING_ZNS 00:42:28.274 #undef SPDK_CONFIG_USDT 00:42:28.274 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:42:28.274 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:42:28.274 #undef SPDK_CONFIG_VFIO_USER 00:42:28.274 #define SPDK_CONFIG_VFIO_USER_DIR 00:42:28.274 #define SPDK_CONFIG_VHOST 1 00:42:28.274 #define SPDK_CONFIG_VIRTIO 1 00:42:28.274 #undef SPDK_CONFIG_VTUNE 00:42:28.274 #define SPDK_CONFIG_VTUNE_DIR 00:42:28.274 #define SPDK_CONFIG_WERROR 1 00:42:28.274 #define SPDK_CONFIG_WPDK_DIR 00:42:28.274 #undef SPDK_CONFIG_XNVME 00:42:28.274 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:42:28.274 09:09:03 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:42:28.274 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:28.274 09:09:03 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:28.274 09:09:03 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:28.274 09:09:03 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:28.274 09:09:03 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:28.275 09:09:03 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:28.275 09:09:03 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:28.275 09:09:03 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:42:28.275 09:09:03 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:42:28.275 09:09:03 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:42:28.275 09:09:03 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:42:28.275 09:09:03 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
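The paired ": 0" (or ": 1") and "export SPDK_TEST_*" entries above and below are consistent with autotest_common.sh giving every test flag a default value before exporting it; a plausible reconstruction of that idiom (assumed, not copied from the script):
: "${SPDK_TEST_NVME:=0}"     # keep any value already set by the job configuration, else default to 0
export SPDK_TEST_NVME
: "${SPDK_RUN_ASAN:=0}"
export SPDK_RUN_ASAN
Flags enabled for this run therefore trace as ": 1" (for example SPDK_TEST_NVME and SPDK_RUN_ASAN above), while everything left unset traces as ": 0".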
00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:42:28.275 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:42:28.276 09:09:03 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 167970 ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 167970 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Efj67o 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Efj67o/tests/interrupt /tmp/spdk.Efj67o 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:42:28.276 09:09:03 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=10311258112 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10288758784 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=6272561152 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:28.276 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.277 09:09:03 reactor_set_interrupt 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=93586423808 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6116356096 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:42:28.277 * Looking for test storage... 
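With the df -T output read into the mounts/fss/sizes/avails arrays above, the entries that follow show the selection step: the filesystem backing the test directory is looked up, its free space (target_space) is checked against the ~2 GiB request (the real helper adds a small margin, hence requested_size=2214592512), and SPDK_TEST_STORAGE is exported. A simplified sketch of that decision (the real set_test_storage also handles tmpfs/ramfs mounts and fallback directories):
testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
requested=2147483648                                     # 2 GiB base request, per the trace
mount=$(df "$testdir" | awk '$1 !~ /Filesystem/ {print $6}')
avail=$(df --output=avail -B1 "$mount" | tail -n 1)
if (( avail >= requested )); then
    export SPDK_TEST_STORAGE="$testdir"
fi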
00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=10311258112 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12503351296 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=168019 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:28.277 09:09:03 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 168019 /var/tmp/spdk.sock 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 168019 ']' 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:28.277 09:09:03 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:28.277 [2024-07-12 09:09:03.395406] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:42:28.277 [2024-07-12 09:09:03.395859] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168019 ] 00:42:28.535 [2024-07-12 09:09:03.576059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:28.792 [2024-07-12 09:09:03.867042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.792 [2024-07-12 09:09:03.867194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.792 [2024-07-12 09:09:03.867424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.049 [2024-07-12 09:09:04.225016] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:29.306 09:09:04 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:29.306 09:09:04 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:42:29.306 09:09:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:42:29.306 09:09:04 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:29.563 Malloc0 00:42:29.563 Malloc1 00:42:29.563 Malloc2 00:42:29.563 09:09:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:42:29.563 09:09:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:42:29.563 09:09:04 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:29.563 09:09:04 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:42:29.820 5000+0 records in 00:42:29.820 5000+0 records out 00:42:29.820 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0264954 s, 386 MB/s 00:42:29.820 09:09:04 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:42:30.079 AIO0 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 168019 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 168019 without_thd 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=168019 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:30.079 09:09:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:30.079 09:09:05 
reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:42:30.336 09:09:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:42:30.337 09:09:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:42:30.595 spdk_thread ids are 1 on reactor0. 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168019 0 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168019 0 idle 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:30.595 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168019 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.95 reactor_0' 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168019 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.95 reactor_0 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:30.865 
09:09:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168019 1 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168019 1 idle 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:30.865 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:42:30.866 09:09:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168031 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.00 reactor_1' 00:42:30.866 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168031 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.00 reactor_1 00:42:30.866 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:30.866 09:09:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168019 2 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168019 2 idle 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:30.866 09:09:06 reactor_set_interrupt 
-- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:30.866 09:09:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168032 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.00 reactor_2' 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168032 root 20 0 20.1t 151236 31424 S 0.0 1.2 0:00.00 reactor_2 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:42:31.148 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:42:31.405 [2024-07-12 09:09:06.433341] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:31.405 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:42:31.662 [2024-07-12 09:09:06.716970] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:42:31.662 [2024-07-12 09:09:06.717805] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:31.662 09:09:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:42:31.920 [2024-07-12 09:09:07.021008] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:42:31.920 [2024-07-12 09:09:07.022058] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 168019 0 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 168019 0 busy 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:31.920 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168019 root 20 0 20.1t 151344 31424 R 99.9 1.2 0:01.44 reactor_0' 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168019 root 20 0 20.1t 151344 31424 R 99.9 1.2 0:01.44 reactor_0 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 168019 2 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 168019 2 busy 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 168032 root 20 0 20.1t 151344 31424 R 93.3 1.2 0:00.33 reactor_2' 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168032 root 20 0 20.1t 151344 31424 R 93.3 1.2 0:00.33 reactor_2 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:32.178 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.3 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:42:32.436 [2024-07-12 09:09:07.609061] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:42:32.436 [2024-07-12 09:09:07.610049] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 168019 2 00:42:32.436 09:09:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168019 2 idle 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:32.437 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168032 root 20 0 20.1t 151412 31424 S 0.0 1.2 0:00.58 reactor_2' 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168032 root 20 0 20.1t 151412 31424 S 0.0 1.2 0:00.58 reactor_2 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 
00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:32.695 09:09:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:42:32.952 [2024-07-12 09:09:08.056919] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:42:32.952 [2024-07-12 09:09:08.057697] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:32.952 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:42:32.952 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:42:32.952 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:42:33.211 [2024-07-12 09:09:08.397387] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 168019 0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168019 0 idle 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168019 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168019 -w 256 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168019 root 20 0 20.1t 151500 31424 S 0.0 1.2 0:02.31 reactor_0' 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168019 root 20 0 20.1t 151500 31424 S 0.0 1.2 0:02.31 reactor_0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:42:33.470 09:09:08 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:42:33.470 09:09:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 168019 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 168019 ']' 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 168019 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168019 00:42:33.470 killing process with pid 168019 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168019' 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 168019 00:42:33.470 09:09:08 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 168019 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:42:35.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=168192 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:42:35.389 09:09:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 168192 /var/tmp/spdk.sock 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 168192 ']' 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:35.389 09:09:10 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:35.389 [2024-07-12 09:09:10.297522] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:42:35.389 [2024-07-12 09:09:10.298007] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168192 ] 00:42:35.389 [2024-07-12 09:09:10.482998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:35.665 [2024-07-12 09:09:10.755283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.665 [2024-07-12 09:09:10.755371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.665 [2024-07-12 09:09:10.755368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:35.923 [2024-07-12 09:09:11.105012] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:36.181 09:09:11 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:36.181 09:09:11 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:42:36.181 09:09:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:42:36.181 09:09:11 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:36.746 Malloc0 00:42:36.746 Malloc1 00:42:36.746 Malloc2 00:42:36.746 09:09:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:42:36.746 09:09:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:42:36.746 09:09:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:36.746 09:09:11 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:42:36.746 5000+0 records in 00:42:36.746 5000+0 records out 00:42:36.746 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0263536 s, 389 MB/s 00:42:36.747 09:09:11 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:42:37.004 AIO0 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 168192 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 168192 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=168192 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:42:37.004 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg 
reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:42:37.263 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:42:37.522 spdk_thread ids are 1 on reactor0. 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168192 0 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168192 0 idle 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:37.522 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168192 root 20 0 20.1t 151560 31804 S 6.7 1.2 0:00.93 reactor_0' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168192 root 20 0 20.1t 151560 31804 S 6.7 1.2 0:00.93 reactor_0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168192 1 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168192 1 idle 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168195 root 20 0 20.1t 151560 31804 S 0.0 1.2 0:00.00 reactor_1' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168195 root 20 0 20.1t 151560 31804 S 0.0 1.2 0:00.00 reactor_1 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 168192 2 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168192 2 idle 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( 
j != 0 )) 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:37.781 09:09:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168196 root 20 0 20.1t 151560 31804 S 0.0 1.2 0:00.00 reactor_2' 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168196 root 20 0 20.1t 151560 31804 S 0.0 1.2 0:00.00 reactor_2 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:42:38.039 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:42:38.297 [2024-07-12 09:09:13.349046] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:42:38.297 [2024-07-12 09:09:13.349346] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:42:38.297 [2024-07-12 09:09:13.350435] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:38.297 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:42:38.556 [2024-07-12 09:09:13.576970] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:42:38.556 [2024-07-12 09:09:13.578042] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 168192 0 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 168192 0 busy 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:38.556 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168192 root 20 0 20.1t 151676 31804 R 99.9 1.2 0:01.34 reactor_0' 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168192 root 20 0 20.1t 151676 31804 R 99.9 1.2 0:01.34 reactor_0 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:38.814 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 168192 2 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 168192 2 busy 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 168196 root 20 0 20.1t 151676 31804 R 99.9 1.2 0:00.34 reactor_2' 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168196 root 20 0 20.1t 151676 31804 R 99.9 1.2 0:00.34 reactor_2 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:38.815 09:09:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:42:39.073 [2024-07-12 09:09:14.201287] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:42:39.073 [2024-07-12 09:09:14.202066] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 168192 2 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168192 2 idle 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:39.073 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168196 root 20 0 20.1t 151720 31804 S 0.0 1.2 0:00.62 reactor_2' 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168196 root 20 0 20.1t 151720 31804 S 0.0 1.2 0:00.62 reactor_2 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:39.332 
09:09:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:39.332 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:42:39.591 [2024-07-12 09:09:14.641443] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:42:39.591 [2024-07-12 09:09:14.642797] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:42:39.591 [2024-07-12 09:09:14.642965] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 168192 0 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 168192 0 idle 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=168192 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 168192 -w 256 00:42:39.591 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 168192 root 20 0 20.1t 151748 31804 S 0.0 1.2 0:02.23 reactor_0' 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 168192 root 20 0 20.1t 151748 31804 S 0.0 1.2 0:02.23 reactor_0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:42:39.850 09:09:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 168192 00:42:39.850 09:09:14 reactor_set_interrupt -- 
common/autotest_common.sh@948 -- # '[' -z 168192 ']' 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 168192 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168192 00:42:39.850 killing process with pid 168192 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168192' 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 168192 00:42:39.850 09:09:14 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 168192 00:42:41.226 09:09:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:42:41.226 09:09:16 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:42:41.226 ************************************ 00:42:41.226 END TEST reactor_set_interrupt 00:42:41.226 ************************************ 00:42:41.226 00:42:41.226 real 0m13.274s 00:42:41.226 user 0m14.109s 00:42:41.226 sys 0m1.685s 00:42:41.226 09:09:16 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:41.226 09:09:16 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:41.487 09:09:16 -- common/autotest_common.sh@1142 -- # return 0 00:42:41.487 09:09:16 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:42:41.487 09:09:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:41.487 09:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:41.487 09:09:16 -- common/autotest_common.sh@10 -- # set +x 00:42:41.487 ************************************ 00:42:41.487 START TEST reap_unregistered_poller 00:42:41.487 ************************************ 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:42:41.487 * Looking for test storage... 00:42:41.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
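The killprocess flow traced above is deliberately defensive: it rejects an empty PID, confirms the process is still alive with kill -0, resolves the command name through ps on Linux so it never signals a sudo wrapper by accident, and only then kills and waits to collect the exit status. An approximation of that logic, with error handling trimmed and the sudo special case reduced to a bail-out:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                 # the '[' -z 168192 ']' step in the trace
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1   # the real helper treats sudo specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it so the caller sees the exit code
    }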
00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:41.487 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:42:41.487 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:42:41.488 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:42:41.488 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:42:41.488 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_CET=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- 
common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_URING_ZNS=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@59 -- # 
CONFIG_HAVE_ARC4RANDOM=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_FC=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:42:41.488 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
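Both interrupt_common.sh and applications.sh locate themselves with the same dirname/readlink -f idiom before sourcing anything else, which is why every path in the traces above is absolute. The pattern, shown here with BASH_SOURCE where the scripts above pass the test script's path explicitly:

    testdir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")  # e.g. .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")                  # repo root, two levels up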
00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:42:41.488 09:09:16 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:42:41.488 #define SPDK_CONFIG_H 00:42:41.488 #define SPDK_CONFIG_APPS 1 00:42:41.488 #define SPDK_CONFIG_ARCH native 00:42:41.488 #define SPDK_CONFIG_ASAN 1 00:42:41.488 #undef SPDK_CONFIG_AVAHI 00:42:41.488 #undef SPDK_CONFIG_CET 00:42:41.488 #define SPDK_CONFIG_COVERAGE 1 00:42:41.488 #define SPDK_CONFIG_CROSS_PREFIX 00:42:41.488 #undef SPDK_CONFIG_CRYPTO 00:42:41.488 #undef SPDK_CONFIG_CRYPTO_MLX5 00:42:41.488 #undef SPDK_CONFIG_CUSTOMOCF 00:42:41.488 #undef SPDK_CONFIG_DAOS 00:42:41.488 #define SPDK_CONFIG_DAOS_DIR 00:42:41.488 #define SPDK_CONFIG_DEBUG 1 00:42:41.488 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:42:41.488 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:42:41.488 #define SPDK_CONFIG_DPDK_INC_DIR 00:42:41.488 #define SPDK_CONFIG_DPDK_LIB_DIR 00:42:41.488 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:42:41.488 #undef SPDK_CONFIG_DPDK_UADK 00:42:41.488 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:42:41.488 #define SPDK_CONFIG_EXAMPLES 1 00:42:41.488 #undef SPDK_CONFIG_FC 00:42:41.488 #define SPDK_CONFIG_FC_PATH 00:42:41.488 #define SPDK_CONFIG_FIO_PLUGIN 1 00:42:41.488 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:42:41.488 #undef SPDK_CONFIG_FUSE 00:42:41.488 #undef SPDK_CONFIG_FUZZER 00:42:41.488 #define SPDK_CONFIG_FUZZER_LIB 00:42:41.488 #undef SPDK_CONFIG_GOLANG 00:42:41.488 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:42:41.488 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:42:41.488 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:42:41.488 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:42:41.488 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:42:41.488 #undef SPDK_CONFIG_HAVE_LIBBSD 00:42:41.488 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:42:41.488 #define SPDK_CONFIG_IDXD 1 00:42:41.488 #undef SPDK_CONFIG_IDXD_KERNEL 00:42:41.489 #undef SPDK_CONFIG_IPSEC_MB 00:42:41.489 #define SPDK_CONFIG_IPSEC_MB_DIR 00:42:41.489 #define SPDK_CONFIG_ISAL 1 00:42:41.489 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:42:41.489 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:42:41.489 #define SPDK_CONFIG_LIBDIR 00:42:41.489 #undef SPDK_CONFIG_LTO 00:42:41.489 #define SPDK_CONFIG_MAX_LCORES 128 00:42:41.489 #define SPDK_CONFIG_NVME_CUSE 1 00:42:41.489 #undef SPDK_CONFIG_OCF 00:42:41.489 #define SPDK_CONFIG_OCF_PATH 00:42:41.489 #define SPDK_CONFIG_OPENSSL_PATH 00:42:41.489 #undef SPDK_CONFIG_PGO_CAPTURE 00:42:41.489 #define SPDK_CONFIG_PGO_DIR 00:42:41.489 #undef SPDK_CONFIG_PGO_USE 00:42:41.489 #define SPDK_CONFIG_PREFIX /usr/local 00:42:41.489 #define SPDK_CONFIG_RAID5F 1 00:42:41.489 #undef SPDK_CONFIG_RBD 00:42:41.489 #define SPDK_CONFIG_RDMA 1 00:42:41.489 
#define SPDK_CONFIG_RDMA_PROV verbs 00:42:41.489 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:42:41.489 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:42:41.489 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:42:41.489 #undef SPDK_CONFIG_SHARED 00:42:41.489 #undef SPDK_CONFIG_SMA 00:42:41.489 #define SPDK_CONFIG_TESTS 1 00:42:41.489 #undef SPDK_CONFIG_TSAN 00:42:41.489 #undef SPDK_CONFIG_UBLK 00:42:41.489 #define SPDK_CONFIG_UBSAN 1 00:42:41.489 #define SPDK_CONFIG_UNIT_TESTS 1 00:42:41.489 #undef SPDK_CONFIG_URING 00:42:41.489 #define SPDK_CONFIG_URING_PATH 00:42:41.489 #undef SPDK_CONFIG_URING_ZNS 00:42:41.489 #undef SPDK_CONFIG_USDT 00:42:41.489 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:42:41.489 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:42:41.489 #undef SPDK_CONFIG_VFIO_USER 00:42:41.489 #define SPDK_CONFIG_VFIO_USER_DIR 00:42:41.489 #define SPDK_CONFIG_VHOST 1 00:42:41.489 #define SPDK_CONFIG_VIRTIO 1 00:42:41.489 #undef SPDK_CONFIG_VTUNE 00:42:41.489 #define SPDK_CONFIG_VTUNE_DIR 00:42:41.489 #define SPDK_CONFIG_WERROR 1 00:42:41.489 #define SPDK_CONFIG_WPDK_DIR 00:42:41.489 #undef SPDK_CONFIG_XNVME 00:42:41.489 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:41.489 09:09:16 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:41.489 09:09:16 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:41.489 09:09:16 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:41.489 09:09:16 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:41.489 09:09:16 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:42:41.489 09:09:16 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:42:41.489 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:42:41.490 
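Each '-- # : 0' / 'export SPDK_TEST_...' pair in this stretch of the trace is one test flag being defaulted and then exported. A plausible reconstruction of the underlying idiom (xtrace prints the no-op ':' with its argument already expanded, which is why only ': 0' appears in the log):

    : "${SPDK_TEST_FTL:=0}"   # ':' is a no-op; ':=' assigns the default only if unset
    export SPDK_TEST_FTL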
09:09:16 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 168370 ]] 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 168370 00:42:41.490 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.nrAFOK 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.nrAFOK/tests/interrupt /tmp/spdk.nrAFOK 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=10311221248 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10288795648 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_2/ubuntu2004-libvirt/output 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=93584392192 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6118387712 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:42:41.491 * Looking for test storage... 
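set_test_storage, traced above, reads df -T into parallel associative arrays keyed by mount point, then maps each candidate directory to its mount and checks whether the requested space (2 GiB plus 64 MiB of slack, hence 2214592512) is available there. A condensed sketch, assuming df's default 1 KiB blocks are scaled to bytes:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts[$mount]=$source
        fss[$mount]=$fs
        sizes[$mount]=$((size * 1024))    # df -T prints 1K blocks; store bytes
        avails[$mount]=$((avail * 1024))
        uses[$mount]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    # map a candidate dir to its mount and test the free space, as the trace does
    mount=$(df /home/vagrant/spdk_repo/spdk/test/interrupt | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= 2214592512 )) && echo "using $mount for test storage"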
00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=10311221248 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12503388160 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:42:41.491 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=168413 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:41.492 09:09:16 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 168413 /var/tmp/spdk.sock 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 168413 ']' 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:41.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
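The start_intr_tgt/waitforlisten sequence being traced here boils down to: launch interrupt_tgt in the background, install the cleanup trap, then wait until the RPC UNIX socket at /var/tmp/spdk.sock is usable or the target dies. A minimal sketch under the assumption that a plain socket-existence poll is enough (the real helper in autotest_common.sh is more thorough, retrying RPC calls up to max_retries=100):

  wait_for_rpc_sock() {    # hypothetical simplified stand-in for waitforlisten
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
          [[ -S $rpc_addr ]] && return 0           # socket exists, assume it is ready
          sleep 0.1
      done
      return 1
  }

  /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
  intr_tgt_pid=$!
  trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
  wait_for_rpc_sock "$intr_tgt_pid" /var/tmp/spdk.sock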
00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:41.492 09:09:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:42:41.750 [2024-07-12 09:09:16.720418] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:41.750 [2024-07-12 09:09:16.720832] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168413 ] 00:42:41.750 [2024-07-12 09:09:16.905758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:42.317 [2024-07-12 09:09:17.208314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:42.317 [2024-07-12 09:09:17.208491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.317 [2024-07-12 09:09:17.208496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:42.575 [2024-07-12 09:09:17.546754] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:42.575 09:09:17 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:42.575 09:09:17 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:42:42.575 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:42:42.575 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:42:42.575 09:09:17 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.575 09:09:17 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:42:42.833 09:09:17 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:42:42.833 "name": "app_thread", 00:42:42.833 "id": 1, 00:42:42.833 "active_pollers": [], 00:42:42.833 "timed_pollers": [ 00:42:42.833 { 00:42:42.833 "name": "rpc_subsystem_poll_servers", 00:42:42.833 "id": 1, 00:42:42.833 "state": "waiting", 00:42:42.833 "run_count": 0, 00:42:42.833 "busy_count": 0, 00:42:42.833 "period_ticks": 8800000 00:42:42.833 } 00:42:42.833 ], 00:42:42.833 "paused_pollers": [] 00:42:42.833 }' 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:42:42.833 5000+0 records in 00:42:42.833 5000+0 records out 00:42:42.833 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0238621 s, 429 MB/s 00:42:42.833 09:09:17 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:42:43.400 AIO0 00:42:43.400 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:42:43.659 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.659 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:42:43.659 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:42:43.659 "name": "app_thread", 00:42:43.659 "id": 1, 00:42:43.659 "active_pollers": [], 00:42:43.659 "timed_pollers": [ 00:42:43.659 { 00:42:43.659 "name": "rpc_subsystem_poll_servers", 00:42:43.659 "id": 1, 00:42:43.659 "state": "waiting", 00:42:43.659 "run_count": 0, 00:42:43.659 "busy_count": 0, 00:42:43.659 "period_ticks": 8800000 00:42:43.659 } 00:42:43.659 ], 00:42:43.659 "paused_pollers": [] 00:42:43.659 }' 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:42:43.659 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:42:43.917 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:42:43.917 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:42:43.917 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:42:43.917 09:09:18 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 168413 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 168413 ']' 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 168413 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168413 00:42:43.917 killing process with pid 168413 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:43.917 09:09:18 
reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168413' 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 168413 00:42:43.917 09:09:18 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 168413 00:42:45.289 09:09:20 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:42:45.289 09:09:20 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:42:45.289 ************************************ 00:42:45.289 END TEST reap_unregistered_poller 00:42:45.289 ************************************ 00:42:45.289 00:42:45.289 real 0m3.756s 00:42:45.289 user 0m3.282s 00:42:45.289 sys 0m0.604s 00:42:45.289 09:09:20 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:45.289 09:09:20 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:42:45.289 09:09:20 -- common/autotest_common.sh@1142 -- # return 0 00:42:45.289 09:09:20 -- spdk/autotest.sh@198 -- # uname -s 00:42:45.289 09:09:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:42:45.289 09:09:20 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:42:45.289 09:09:20 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:42:45.289 09:09:20 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:42:45.289 09:09:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:45.289 09:09:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:45.289 09:09:20 -- common/autotest_common.sh@10 -- # set +x 00:42:45.289 ************************************ 00:42:45.289 START TEST spdk_dd 00:42:45.289 ************************************ 00:42:45.289 09:09:20 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:42:45.289 * Looking for test storage... 
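The reap_unregistered_poller run that ends above does all of its checking through rpc.py and jq: the app thread's poller list is captured once right after startup and once after the AIO bdev is torn down, and the surviving poller names are compared against the original set. Roughly, with the before/after comparison collapsed to a single string check:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Snapshot the first thread's poller state as JSON (same call as in the trace).
  app_thread=$($rpc_py -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')

  # Active pollers plus timed pollers, e.g. "" + " rpc_subsystem_poll_servers".
  native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
  native_pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"

  # ...create and later delete the AIO bdev, then snapshot again into remaining_pollers...

  [[ $remaining_pollers == "$native_pollers" ]] || exit 1   # unregistered pollers must not linger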
00:42:45.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:45.289 09:09:20 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:45.289 09:09:20 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:45.289 09:09:20 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:45.289 09:09:20 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:45.289 09:09:20 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:45.289 09:09:20 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:45.289 09:09:20 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:45.289 09:09:20 spdk_dd -- paths/export.sh@5 -- # export PATH 00:42:45.289 09:09:20 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:45.289 09:09:20 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:45.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:45.546 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:46.478 09:09:21 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:42:46.478 09:09:21 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@230 -- # local class 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@232 -- # local progif 00:42:46.478 09:09:21 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@233 -- # class=01 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@15 -- # local i 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@24 -- # return 0 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:42:46.478 09:09:21 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:42:46.479 09:09:21 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:42:46.479 09:09:21 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@139 -- # local lib so 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ librt.so.1 == 
liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:42:46.479 09:09:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:42:46.479 09:09:21 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:42:46.479 09:09:21 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:42:46.479 09:09:21 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:46.479 09:09:21 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:46.479 09:09:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:46.479 ************************************ 00:42:46.479 START TEST spdk_dd_basic_rw 00:42:46.479 ************************************ 00:42:46.479 09:09:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:42:46.737 * 
Looking for test storage... 00:42:46.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:42:46.737 09:09:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2378 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2378 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: 
Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:46.997 09:09:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:42:46.997 ************************************ 00:42:46.997 START TEST dd_bs_lt_native_bs 00:42:46.997 ************************************ 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:46.998 09:09:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:42:46.998 { 00:42:46.998 "subsystems": [ 00:42:46.998 { 00:42:46.998 "subsystem": "bdev", 00:42:46.998 "config": [ 00:42:46.998 { 00:42:46.998 "params": { 00:42:46.998 "trtype": "pcie", 00:42:46.998 "traddr": "0000:00:10.0", 00:42:46.998 "name": "Nvme0" 00:42:46.998 }, 00:42:46.998 "method": "bdev_nvme_attach_controller" 00:42:46.998 }, 00:42:46.998 { 00:42:46.998 "method": "bdev_wait_for_examine" 00:42:46.998 } 00:42:46.998 ] 00:42:46.998 } 00:42:46.998 ] 00:42:46.998 } 00:42:46.998 [2024-07-12 09:09:22.120036] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
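The two full-identify regex matches above are get_native_nvme_bs at work: the whole spdk_nvme_identify report is captured, the currently selected LBA format index is extracted, and that format's data size (4096 here) becomes the native block size that dd_bs_lt_native_bs then deliberately undercuts with --bs=2048. The same parsing, condensed:

  pci=0000:00:10.0

  # Capture the controller's identify report (same binary and -r string as in the trace).
  id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

  # Which LBA format is currently selected, e.g. 04.
  [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}

  # The data size of that format is the native block size, 4096 in this run.
  [[ $id =~ LBA\ Format\ \#${lbaf}:\ Data\ Size:\ *([0-9]+) ]] && native_bs=${BASH_REMATCH[1]}

  echo "$native_bs"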
00:42:46.998 [2024-07-12 09:09:22.120508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168748 ] 00:42:47.255 [2024-07-12 09:09:22.283249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.511 [2024-07-12 09:09:22.500594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.767 [2024-07-12 09:09:22.883345] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:42:47.767 [2024-07-12 09:09:22.883674] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:48.696 [2024-07-12 09:09:23.609418] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:48.954 00:42:48.954 real 0m1.988s 00:42:48.954 user 0m1.670s 00:42:48.954 sys 0m0.270s 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:48.954 ************************************ 00:42:48.954 END TEST dd_bs_lt_native_bs 00:42:48.954 ************************************ 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:42:48.954 09:09:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:42:48.955 ************************************ 00:42:48.955 START TEST dd_rw 00:42:48.955 ************************************ 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:42:48.955 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:49.887 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:42:49.887 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:42:49.887 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:49.887 09:09:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:49.887 { 00:42:49.887 "subsystems": [ 00:42:49.887 { 00:42:49.887 "subsystem": "bdev", 00:42:49.887 "config": [ 00:42:49.887 { 00:42:49.887 "params": { 00:42:49.887 "trtype": "pcie", 00:42:49.887 "traddr": "0000:00:10.0", 00:42:49.887 "name": "Nvme0" 00:42:49.887 }, 00:42:49.887 "method": "bdev_nvme_attach_controller" 00:42:49.887 }, 00:42:49.887 { 00:42:49.887 "method": "bdev_wait_for_examine" 00:42:49.887 } 00:42:49.887 ] 00:42:49.887 } 00:42:49.887 ] 00:42:49.887 } 00:42:49.887 [2024-07-12 09:09:24.817682] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
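dd_rw, which starts here, builds its matrix by left-shifting the native block size (4096, 8192, 16384) and pairing each block size with queue depths 1 and 64; for every combination it writes a generated file through spdk_dd, reads it back, and diffs the two dumps. A sketch of the loop, treating the gen_conf and gen_bytes helpers from the trace as given and shortening the dump paths:

  native_bs=4096
  qds=(1 64)
  bss=()
  for bs_shift in {0..2}; do
      bss+=($((native_bs << bs_shift)))        # 4096 8192 16384
  done

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  count=15
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          size=$((count * bs))                 # 61440 for the 4096-byte case shown above
          gen_bytes "$size" > dd.dump0         # helper from dd/common.sh (redirect assumed)
          $dd_bin --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
          $dd_bin --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
          diff -q dd.dump0 dd.dump1            # the copies must be identical
      done
  done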
00:42:49.887 [2024-07-12 09:09:24.818141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168800 ] 00:42:49.887 [2024-07-12 09:09:24.985343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.145 [2024-07-12 09:09:25.238616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.647  Copying: 60/60 [kB] (average 29 MBps) 00:42:51.647 00:42:51.647 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:42:51.647 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:42:51.647 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:51.647 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:51.647 { 00:42:51.647 "subsystems": [ 00:42:51.647 { 00:42:51.647 "subsystem": "bdev", 00:42:51.647 "config": [ 00:42:51.647 { 00:42:51.647 "params": { 00:42:51.647 "trtype": "pcie", 00:42:51.647 "traddr": "0000:00:10.0", 00:42:51.647 "name": "Nvme0" 00:42:51.647 }, 00:42:51.647 "method": "bdev_nvme_attach_controller" 00:42:51.647 }, 00:42:51.647 { 00:42:51.647 "method": "bdev_wait_for_examine" 00:42:51.647 } 00:42:51.647 ] 00:42:51.647 } 00:42:51.647 ] 00:42:51.647 } 00:42:51.647 [2024-07-12 09:09:26.809458] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:51.647 [2024-07-12 09:09:26.809949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168830 ] 00:42:51.906 [2024-07-12 09:09:26.984719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.163 [2024-07-12 09:09:27.227883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.801  Copying: 60/60 [kB] (average 29 MBps) 00:42:53.801 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:53.801 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:53.801 { 00:42:53.801 "subsystems": [ 
00:42:53.801 { 00:42:53.801 "subsystem": "bdev", 00:42:53.801 "config": [ 00:42:53.801 { 00:42:53.801 "params": { 00:42:53.801 "trtype": "pcie", 00:42:53.801 "traddr": "0000:00:10.0", 00:42:53.801 "name": "Nvme0" 00:42:53.801 }, 00:42:53.802 "method": "bdev_nvme_attach_controller" 00:42:53.802 }, 00:42:53.802 { 00:42:53.802 "method": "bdev_wait_for_examine" 00:42:53.802 } 00:42:53.802 ] 00:42:53.802 } 00:42:53.802 ] 00:42:53.802 } 00:42:53.802 [2024-07-12 09:09:28.888519] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:53.802 [2024-07-12 09:09:28.888863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168875 ] 00:42:54.061 [2024-07-12 09:09:29.059792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.320 [2024-07-12 09:09:29.279918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.957  Copying: 1024/1024 [kB] (average 1000 MBps) 00:42:55.957 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:42:55.957 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:56.524 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:42:56.524 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:42:56.524 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:56.524 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:56.524 { 00:42:56.524 "subsystems": [ 00:42:56.524 { 00:42:56.524 "subsystem": "bdev", 00:42:56.524 "config": [ 00:42:56.524 { 00:42:56.524 "params": { 00:42:56.524 "trtype": "pcie", 00:42:56.524 "traddr": "0000:00:10.0", 00:42:56.524 "name": "Nvme0" 00:42:56.524 }, 00:42:56.524 "method": "bdev_nvme_attach_controller" 00:42:56.524 }, 00:42:56.524 { 00:42:56.524 "method": "bdev_wait_for_examine" 00:42:56.524 } 00:42:56.524 ] 00:42:56.524 } 00:42:56.524 ] 00:42:56.524 } 00:42:56.524 [2024-07-12 09:09:31.493336] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:42:56.524 [2024-07-12 09:09:31.493815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168913 ] 00:42:56.524 [2024-07-12 09:09:31.670567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.782 [2024-07-12 09:09:31.936627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:58.721  Copying: 60/60 [kB] (average 58 MBps) 00:42:58.721 00:42:58.721 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:42:58.721 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:42:58.721 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:42:58.721 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:42:58.721 [2024-07-12 09:09:33.586362] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:42:58.721 [2024-07-12 09:09:33.586787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168941 ] 00:42:58.721 { 00:42:58.721 "subsystems": [ 00:42:58.721 { 00:42:58.721 "subsystem": "bdev", 00:42:58.721 "config": [ 00:42:58.721 { 00:42:58.721 "params": { 00:42:58.721 "trtype": "pcie", 00:42:58.721 "traddr": "0000:00:10.0", 00:42:58.721 "name": "Nvme0" 00:42:58.721 }, 00:42:58.721 "method": "bdev_nvme_attach_controller" 00:42:58.721 }, 00:42:58.721 { 00:42:58.721 "method": "bdev_wait_for_examine" 00:42:58.721 } 00:42:58.721 ] 00:42:58.721 } 00:42:58.721 ] 00:42:58.721 } 00:42:58.721 [2024-07-12 09:09:33.745328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:58.979 [2024-07-12 09:09:34.043504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:00.482  Copying: 60/60 [kB] (average 58 MBps) 00:43:00.482 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:00.482 09:09:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:00.482 { 00:43:00.482 "subsystems": [ 
00:43:00.482 { 00:43:00.482 "subsystem": "bdev", 00:43:00.482 "config": [ 00:43:00.482 { 00:43:00.482 "params": { 00:43:00.482 "trtype": "pcie", 00:43:00.482 "traddr": "0000:00:10.0", 00:43:00.482 "name": "Nvme0" 00:43:00.482 }, 00:43:00.482 "method": "bdev_nvme_attach_controller" 00:43:00.482 }, 00:43:00.482 { 00:43:00.482 "method": "bdev_wait_for_examine" 00:43:00.482 } 00:43:00.482 ] 00:43:00.482 } 00:43:00.482 ] 00:43:00.482 } 00:43:00.482 [2024-07-12 09:09:35.599722] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:00.482 [2024-07-12 09:09:35.600252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168969 ] 00:43:00.740 [2024-07-12 09:09:35.764382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:00.999 [2024-07-12 09:09:35.978454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.648  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:02.648 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:02.648 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:03.242 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:43:03.242 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:03.242 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:03.242 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:03.242 [2024-07-12 09:09:38.316214] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:03.242 [2024-07-12 09:09:38.316766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169008 ] 00:43:03.242 { 00:43:03.242 "subsystems": [ 00:43:03.242 { 00:43:03.242 "subsystem": "bdev", 00:43:03.242 "config": [ 00:43:03.242 { 00:43:03.242 "params": { 00:43:03.242 "trtype": "pcie", 00:43:03.242 "traddr": "0000:00:10.0", 00:43:03.242 "name": "Nvme0" 00:43:03.242 }, 00:43:03.242 "method": "bdev_nvme_attach_controller" 00:43:03.242 }, 00:43:03.242 { 00:43:03.242 "method": "bdev_wait_for_examine" 00:43:03.242 } 00:43:03.242 ] 00:43:03.242 } 00:43:03.242 ] 00:43:03.242 } 00:43:03.500 [2024-07-12 09:09:38.486341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.759 [2024-07-12 09:09:38.733024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.391  Copying: 56/56 [kB] (average 27 MBps) 00:43:05.391 00:43:05.391 09:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:43:05.391 09:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:05.391 09:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:05.391 09:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:05.391 { 00:43:05.391 "subsystems": [ 00:43:05.391 { 00:43:05.391 "subsystem": "bdev", 00:43:05.391 "config": [ 00:43:05.391 { 00:43:05.391 "params": { 00:43:05.391 "trtype": "pcie", 00:43:05.391 "traddr": "0000:00:10.0", 00:43:05.391 "name": "Nvme0" 00:43:05.391 }, 00:43:05.391 "method": "bdev_nvme_attach_controller" 00:43:05.391 }, 00:43:05.391 { 00:43:05.391 "method": "bdev_wait_for_examine" 00:43:05.391 } 00:43:05.392 ] 00:43:05.392 } 00:43:05.392 ] 00:43:05.392 } 00:43:05.392 [2024-07-12 09:09:40.282781] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:05.392 [2024-07-12 09:09:40.283262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169055 ] 00:43:05.392 [2024-07-12 09:09:40.458761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.649 [2024-07-12 09:09:40.710362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.281  Copying: 56/56 [kB] (average 27 MBps) 00:43:07.281 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:07.281 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:07.281 { 00:43:07.281 "subsystems": [ 00:43:07.281 { 00:43:07.281 "subsystem": "bdev", 00:43:07.281 "config": [ 00:43:07.281 { 00:43:07.281 "params": { 00:43:07.281 "trtype": "pcie", 00:43:07.281 "traddr": "0000:00:10.0", 00:43:07.281 "name": "Nvme0" 00:43:07.281 }, 00:43:07.281 "method": "bdev_nvme_attach_controller" 00:43:07.281 }, 00:43:07.281 { 00:43:07.281 "method": "bdev_wait_for_examine" 00:43:07.281 } 00:43:07.281 ] 00:43:07.281 } 00:43:07.281 ] 00:43:07.281 } 00:43:07.281 [2024-07-12 09:09:42.333490] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:07.281 [2024-07-12 09:09:42.333882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169087 ] 00:43:07.539 [2024-07-12 09:09:42.503592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.539 [2024-07-12 09:09:42.721904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.040  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:09.040 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:09.040 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:09.974 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:43:09.974 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:09.974 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:09.974 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:09.974 { 00:43:09.974 "subsystems": [ 00:43:09.974 { 00:43:09.974 "subsystem": "bdev", 00:43:09.974 "config": [ 00:43:09.974 { 00:43:09.974 "params": { 00:43:09.974 "trtype": "pcie", 00:43:09.974 "traddr": "0000:00:10.0", 00:43:09.974 "name": "Nvme0" 00:43:09.974 }, 00:43:09.974 "method": "bdev_nvme_attach_controller" 00:43:09.974 }, 00:43:09.974 { 00:43:09.974 "method": "bdev_wait_for_examine" 00:43:09.974 } 00:43:09.974 ] 00:43:09.974 } 00:43:09.974 ] 00:43:09.974 } 00:43:09.974 [2024-07-12 09:09:44.898050] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:09.974 [2024-07-12 09:09:44.898651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169115 ] 00:43:09.974 [2024-07-12 09:09:45.080555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:10.232 [2024-07-12 09:09:45.315096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:11.735  Copying: 56/56 [kB] (average 54 MBps) 00:43:11.735 00:43:11.735 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:43:11.735 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:11.735 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:11.735 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:11.735 { 00:43:11.735 "subsystems": [ 00:43:11.735 { 00:43:11.735 "subsystem": "bdev", 00:43:11.735 "config": [ 00:43:11.735 { 00:43:11.735 "params": { 00:43:11.735 "trtype": "pcie", 00:43:11.735 "traddr": "0000:00:10.0", 00:43:11.735 "name": "Nvme0" 00:43:11.735 }, 00:43:11.735 "method": "bdev_nvme_attach_controller" 00:43:11.735 }, 00:43:11.735 { 00:43:11.735 "method": "bdev_wait_for_examine" 00:43:11.735 } 00:43:11.735 ] 00:43:11.735 } 00:43:11.735 ] 00:43:11.735 } 00:43:11.994 [2024-07-12 09:09:46.954238] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:11.994 [2024-07-12 09:09:46.954788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169142 ] 00:43:11.994 [2024-07-12 09:09:47.131277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:12.251 [2024-07-12 09:09:47.351755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:13.751  Copying: 56/56 [kB] (average 54 MBps) 00:43:13.751 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:13.751 09:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:13.751 { 00:43:13.751 "subsystems": [ 
00:43:13.751 { 00:43:13.751 "subsystem": "bdev", 00:43:13.751 "config": [ 00:43:13.751 { 00:43:13.751 "params": { 00:43:13.751 "trtype": "pcie", 00:43:13.751 "traddr": "0000:00:10.0", 00:43:13.751 "name": "Nvme0" 00:43:13.751 }, 00:43:13.751 "method": "bdev_nvme_attach_controller" 00:43:13.751 }, 00:43:13.751 { 00:43:13.751 "method": "bdev_wait_for_examine" 00:43:13.751 } 00:43:13.751 ] 00:43:13.751 } 00:43:13.751 ] 00:43:13.751 } 00:43:13.751 [2024-07-12 09:09:48.915717] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:13.751 [2024-07-12 09:09:48.916519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169174 ] 00:43:14.009 [2024-07-12 09:09:49.096534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:14.266 [2024-07-12 09:09:49.336872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.765  Copying: 1024/1024 [kB] (average 500 MBps) 00:43:15.765 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:15.766 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:16.331 09:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:43:16.331 09:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:16.331 09:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:16.331 09:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:16.588 [2024-07-12 09:09:51.553373] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:16.588 [2024-07-12 09:09:51.553791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169229 ] 00:43:16.588 { 00:43:16.588 "subsystems": [ 00:43:16.588 { 00:43:16.588 "subsystem": "bdev", 00:43:16.588 "config": [ 00:43:16.588 { 00:43:16.588 "params": { 00:43:16.588 "trtype": "pcie", 00:43:16.588 "traddr": "0000:00:10.0", 00:43:16.588 "name": "Nvme0" 00:43:16.588 }, 00:43:16.588 "method": "bdev_nvme_attach_controller" 00:43:16.588 }, 00:43:16.588 { 00:43:16.588 "method": "bdev_wait_for_examine" 00:43:16.588 } 00:43:16.588 ] 00:43:16.588 } 00:43:16.588 ] 00:43:16.588 } 00:43:16.588 [2024-07-12 09:09:51.720243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:16.846 [2024-07-12 09:09:51.958894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.341  Copying: 48/48 [kB] (average 46 MBps) 00:43:18.341 00:43:18.341 09:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:43:18.341 09:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:18.341 09:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:18.341 09:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:18.341 { 00:43:18.341 "subsystems": [ 00:43:18.341 { 00:43:18.341 "subsystem": "bdev", 00:43:18.341 "config": [ 00:43:18.341 { 00:43:18.341 "params": { 00:43:18.341 "trtype": "pcie", 00:43:18.341 "traddr": "0000:00:10.0", 00:43:18.341 "name": "Nvme0" 00:43:18.341 }, 00:43:18.341 "method": "bdev_nvme_attach_controller" 00:43:18.341 }, 00:43:18.341 { 00:43:18.341 "method": "bdev_wait_for_examine" 00:43:18.341 } 00:43:18.341 ] 00:43:18.341 } 00:43:18.341 ] 00:43:18.341 } 00:43:18.341 [2024-07-12 09:09:53.483475] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:18.341 [2024-07-12 09:09:53.483921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169256 ] 00:43:18.597 [2024-07-12 09:09:53.654175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.854 [2024-07-12 09:09:53.871634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:20.486  Copying: 48/48 [kB] (average 46 MBps) 00:43:20.486 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:20.486 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:20.486 { 00:43:20.486 "subsystems": [ 00:43:20.486 { 00:43:20.486 "subsystem": "bdev", 00:43:20.486 "config": [ 00:43:20.486 { 00:43:20.486 "params": { 00:43:20.486 "trtype": "pcie", 00:43:20.486 "traddr": "0000:00:10.0", 00:43:20.486 "name": "Nvme0" 00:43:20.486 }, 00:43:20.486 "method": "bdev_nvme_attach_controller" 00:43:20.486 }, 00:43:20.486 { 00:43:20.486 "method": "bdev_wait_for_examine" 00:43:20.486 } 00:43:20.486 ] 00:43:20.486 } 00:43:20.486 ] 00:43:20.486 } 00:43:20.486 [2024-07-12 09:09:55.518965] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:20.486 [2024-07-12 09:09:55.519540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169288 ] 00:43:20.745 [2024-07-12 09:09:55.689408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.745 [2024-07-12 09:09:55.935578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.263  Copying: 1024/1024 [kB] (average 500 MBps) 00:43:22.263 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:43:22.263 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:22.837 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:43:22.837 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:43:22.837 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:22.837 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:23.095 { 00:43:23.095 "subsystems": [ 00:43:23.095 { 00:43:23.095 "subsystem": "bdev", 00:43:23.095 "config": [ 00:43:23.095 { 00:43:23.095 "params": { 00:43:23.095 "trtype": "pcie", 00:43:23.095 "traddr": "0000:00:10.0", 00:43:23.095 "name": "Nvme0" 00:43:23.095 }, 00:43:23.095 "method": "bdev_nvme_attach_controller" 00:43:23.095 }, 00:43:23.095 { 00:43:23.095 "method": "bdev_wait_for_examine" 00:43:23.095 } 00:43:23.095 ] 00:43:23.095 } 00:43:23.095 ] 00:43:23.095 } 00:43:23.095 [2024-07-12 09:09:58.065175] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:23.095 [2024-07-12 09:09:58.065543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169316 ] 00:43:23.095 [2024-07-12 09:09:58.237997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.353 [2024-07-12 09:09:58.493782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:25.318  Copying: 48/48 [kB] (average 46 MBps) 00:43:25.318 00:43:25.318 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:43:25.318 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:43:25.318 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:25.318 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:25.318 { 00:43:25.318 "subsystems": [ 00:43:25.318 { 00:43:25.318 "subsystem": "bdev", 00:43:25.318 "config": [ 00:43:25.318 { 00:43:25.318 "params": { 00:43:25.318 "trtype": "pcie", 00:43:25.318 "traddr": "0000:00:10.0", 00:43:25.318 "name": "Nvme0" 00:43:25.318 }, 00:43:25.318 "method": "bdev_nvme_attach_controller" 00:43:25.318 }, 00:43:25.318 { 00:43:25.318 "method": "bdev_wait_for_examine" 00:43:25.318 } 00:43:25.318 ] 00:43:25.319 } 00:43:25.319 ] 00:43:25.319 } 00:43:25.319 [2024-07-12 09:10:00.221304] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:25.319 [2024-07-12 09:10:00.221986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169364 ] 00:43:25.319 [2024-07-12 09:10:00.390902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.576 [2024-07-12 09:10:00.605830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:27.326  Copying: 48/48 [kB] (average 46 MBps) 00:43:27.326 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:27.326 09:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:27.326 [2024-07-12 09:10:02.154224] 
Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:27.326 [2024-07-12 09:10:02.154609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169392 ] 00:43:27.326 { 00:43:27.326 "subsystems": [ 00:43:27.326 { 00:43:27.326 "subsystem": "bdev", 00:43:27.326 "config": [ 00:43:27.326 { 00:43:27.326 "params": { 00:43:27.326 "trtype": "pcie", 00:43:27.326 "traddr": "0000:00:10.0", 00:43:27.326 "name": "Nvme0" 00:43:27.326 }, 00:43:27.326 "method": "bdev_nvme_attach_controller" 00:43:27.326 }, 00:43:27.326 { 00:43:27.326 "method": "bdev_wait_for_examine" 00:43:27.326 } 00:43:27.326 ] 00:43:27.326 } 00:43:27.326 ] 00:43:27.326 } 00:43:27.326 [2024-07-12 09:10:02.313357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:27.584 [2024-07-12 09:10:02.532048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:29.212  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:29.212 00:43:29.212 ************************************ 00:43:29.212 END TEST dd_rw 00:43:29.212 ************************************ 00:43:29.212 00:43:29.212 real 0m40.028s 00:43:29.212 user 0m33.937s 00:43:29.212 sys 0m4.678s 00:43:29.212 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 ************************************ 00:43:29.213 START TEST dd_rw_offset 00:43:29.213 ************************************ 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=647c9eigvc3szchmtumng0o1vxzayz34g6ic40l40gefjowafknl5l4b3vmxlarxch2c6irucjn6tu5s0foeln9onwh2qemmqb3ppm8t4f32oadbnykp9gfb3viqoeq915zrtedoj3eryxz1sknjevhqasklcskutwbu9jq2lovuf57hpi0evg6us9guryjwmrsgo4hysdvon0dzbv59crme85gs74jd66ac6dvwzkixthr97ihlbh7s91ccdnumfpo3wimcqg1zoe4ih0hme42zb4nunmhgmuapk3u7gu1gcmrxvf33yki7g6xgj59shwng5kbxs0409dg20zmm3bjtpstw8hlra0rxlhjvudghxop3b0kt905ix6lxmtioeq9bj5b6glh3eopn0ln5wefngicqu5pnuvlh7ev6iaw6rc44tq6600640waynarrfyinnfpp98pjq2cjqcvmcxvfk3smnt58dplme3e56e4g97hn5ss0dnf3dixz0hegz34rdjso403nkzpn0ye03wrqm79wlp2qa1bv9z1gdrozldbw9s18ebi2t7u1xaf1u95yh4jzg5ueg0l37i2131q9cvc5ids6n6tbqg5mogq1pm2vpf1hy8avjc1izw9elqwjyhgjm7c60igfu3yq3ewe60rxc1ahjorui7t35wz7j1locd7jqhnlhxpm4t8d9cqlagezdjjbqspbwqmc3viiaskwa0kidxmxo314u8oqtzaekxuh3ognpml0zafq7jdmd5xinz7gv7z7d04m43nzpoeyndzmg2v1n5v8t4r2qdr695s6a2lsjudamkz6o2zyzcklei0cy6zlc7dyyh1mwqhw59dhe0bck8koy0k6sncvf34eob05v1lrvjlzf0vfkzuti3vaix2lxnxhg96hzl7sqloha1ldpzo0b5bymm2otetcylpq3qjrdhxlwcoanevythq0z7d1o6zslwjy870jxqs5zkvk6dh9srn17w3a2woo709ecbxspefwxweugnc6jqesnscpl20dss1xz5y86tvhihsa2h38jihxorb1a2pzafdomgq74myedcxlegammqj67sg6mc5h8anz52t4ff94h9g8limc45z7ir3ijjfdjj0bs9rq6sx2k24hli4p4s3rm2hgy5oxvocceedsveikchozcqzr53d1nrfh4kxckx6o61lrpqud72dg54p0sspm7a3rqeytdf8u5t0340ko051awp63tbhyvdw5gpw85joo0hv4se8e7dfn9m7ektxnpb6xs0szzftdz4ssa6b3w0jwf66t4bljpue7wl2k847nhoumttuw24pjxd3z9wiwzabwjzyomxvp6yvzcvgwn42lbh11xgrkojdx1fsv80q2ey6ln842itbnm0wzyfkn3ti7b5rz081dhfrqfj3a8772uvgnqohivczd109rqnrtmsrb1awzgelyktv6dxqc9ryamv1arb646bwjag00sbj1ab8dttshzls5gwjlg9q60heb6ed2szu90z8tjejo54jk88b29dbeps0f3tf8prl7jm0nie9quf8vee1lzxw1szx0h5v9kcrvo2xly1ssdp4s5cq09hcvwxph5ykt1gd7yup8kciwnv8igcqt2rdl63cylq8kkmfh1hrhf3pqcyy3gaamhvd2t2ytzev4vzitaat7j25d7qx2o8bnj30r9ylaf2spg9jegpz2bc9dd5e4ctgeyupx9k59hl10ulnplqv65n8zweaa0k4x5wflkigy1ipbd387sq0i2ordmodpvlhqlaxvkg0tlqdkt1qks6gnybd8caah64wuuhb6lbd9gvy8b082eipt6n3gg0f26o7btdtzaspsczvbxlow2fc39cl4mhc3vwayadhgpbx6ldpl1hjv6pezj8aoru9nv6tdzfbp2u8b8zwezpzltx0ap6dkv32u4mvw94lmk65q0xzobnlw4ijpyrc73im7ao8eraqf0qlibndkncgj8jdqy4y74wmos0bcklsr7xvwv0e0ao69ech40m5mlweqeexaf3jkxuyv8rg2pfot5c665aup5adnujrrdvft2papkpfdeag9jf9swp3p37su6iawele90dwkxiiy3ktp7eocvkmnc67b8f6cuziuag8rmgoys8aac58weqicwh28dkvogrz99460gmq0scospcy6xzb9grzf8e9xrk238ujq7feqgcxav2pds0bsnrn5c1ocuwwjytgpjd5bbydctyub2ph93gh5svjit3m0uvx5y0mxz2s2ktht54cnviq64couwrojs5afsccdxifboptb4pnf34w78aouyzi6ueya46p86v6sg6zvy16z0xqcjfub6s86lvwu7bh8xlm4ocbn851lctpl7t00cgzpjpf19n3j38j6zowewfkj1mdx746e0u1euod2okljj2wdvc6ordf9ljsnmc184quq94n1lev5eaz0bkwv6gbgwm6hg1arnwurzso4c206ynfcg5p3psw2r56bf742qf37p9vko775p80f91gfye2r2worw1qmfw86xoultgzq6skgq9o7soqdw6hf4vn31kyo1e89yg0q4x53psdnhs6rg70k12qjgr507cwqlmcdoymmq4cr9r93nvi6w47cwm5w1wboifxz8ilr37818g2yv57flhtg7d7kj3fyfv2s129sazhtswea517kmqp7tyf51z5njsm38b5tiy4esi1a3t1ax4ol3bdmatdxa58q1lql4u3wewwmzuza2zilxnof4zo6m73lsg3hqn8ev1n7fg16fxg6aiirinn1sst9au3ftetxlzwbksj2syxbifv2m4w65zh1th9tbe6khxqqmfk3kv5z581cu3vsrl50f1ujjdhpgd9f8euer374yqgr19s9gfpbazkep3fgzonlkipzig6qps4ecgnxo4y5uvd17mluoylqchyb78qgbyc11703kbfnob80ofxp9t1gkpfb9wc0vg3m1pet4jt8sdl8xf71fvwo437n9awqo7wyl3oqyye2cv6oxor1gt5dthed0t4v35q7peeymxjiy89tcbg8m689bjl01206thhjjfssljpib76lcxvgcmc6cu5a5dmubcfp7nq0w3rd347kesy2vnqaz81dlco5ueshug9v104rp3z4mraitee7fmn4dk1g142h4jl2wgcw737ug21i5k0wseajji33nyce34ztpra37oynrsra4ajbibhzhzduw16vcijurxmyc0v3u9klrac5hqcj2wc9yqng9xp1iyx0ax6a9yldmoce007mxiiw4pa7o9i9pu92h53gak87f0gpng41er6wkqg8xg3q5amqi8h44hp2tc01fq7eqk25lo486pyiljpz6lhpxdzhb034inndxgz30axpj0umhmwvicomf3vvofxqmsuuht3i902vussgzk6efvp4i9lysjk62hj02ucal7b289kecflej4xyi53axlir85zqeouqqjohdyi914
y4w2w7zd7rm62aqsqlqszw03225yub8vh79cgop2gvv5toda3vi6pd0c7obtclvzvan9ez1o59zxvgsp89m7iplimhj5trjwsn6k9xm064ahivl9al1n3zduxxwg2x451ql8lxoupcaryobo4stwxt1u8ykaisst6pdc7jscxmaajn2nfdo2b3d8z6qw4wtpjfxhh4bqvj2u7frdd3m1kcyh45a2f35c06lyng1fmextwdrab3jf60rvpxx5dl9xcr3nxrrixo3f8o74yndpq12sj9cqq9ba1ahzl5kxasibnty9po5opmt03wzw9vq2atyptt9l3r5x3xq2yx2b8g4qh577bbfg50uor9h9xfkggumag1u27u7nflhcspdz0i8q66mp1o59g3ebp0nbsa607jkcemnu10eb6kyxvudz277z4mx5wq52cxfdr3x19sj0j7ohebht4exktmvkp6xcux4hu862a6jwjnqtbdtxdsy6m6mipbwwh26f6m9r63ywms9o08pxlv54abhhncnqisod7jwalh 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:43:29.213 09:10:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:43:29.213 { 00:43:29.213 "subsystems": [ 00:43:29.213 { 00:43:29.213 "subsystem": "bdev", 00:43:29.213 "config": [ 00:43:29.213 { 00:43:29.213 "params": { 00:43:29.213 "trtype": "pcie", 00:43:29.213 "traddr": "0000:00:10.0", 00:43:29.213 "name": "Nvme0" 00:43:29.213 }, 00:43:29.213 "method": "bdev_nvme_attach_controller" 00:43:29.213 }, 00:43:29.213 { 00:43:29.213 "method": "bdev_wait_for_examine" 00:43:29.213 } 00:43:29.213 ] 00:43:29.213 } 00:43:29.213 ] 00:43:29.213 } 00:43:29.213 [2024-07-12 09:10:04.290751] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:29.213 [2024-07-12 09:10:04.291163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169440 ] 00:43:29.471 [2024-07-12 09:10:04.465694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.729 [2024-07-12 09:10:04.721448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.360  Copying: 4096/4096 [B] (average 4000 kBps) 00:43:31.360 00:43:31.360 09:10:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:43:31.360 09:10:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:43:31.360 09:10:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:43:31.360 09:10:06 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:43:31.360 { 00:43:31.360 "subsystems": [ 00:43:31.360 { 00:43:31.360 "subsystem": "bdev", 00:43:31.360 "config": [ 00:43:31.360 { 00:43:31.360 "params": { 00:43:31.360 "trtype": "pcie", 00:43:31.360 "traddr": "0000:00:10.0", 00:43:31.360 "name": "Nvme0" 00:43:31.360 }, 00:43:31.360 "method": "bdev_nvme_attach_controller" 00:43:31.360 }, 00:43:31.360 { 00:43:31.360 "method": "bdev_wait_for_examine" 00:43:31.360 } 00:43:31.360 ] 00:43:31.360 } 00:43:31.360 ] 00:43:31.360 } 00:43:31.360 [2024-07-12 09:10:06.255820] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:31.360 [2024-07-12 09:10:06.257249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169471 ] 00:43:31.360 [2024-07-12 09:10:06.424046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.618 [2024-07-12 09:10:06.657274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.251  Copying: 4096/4096 [B] (average 4000 kBps) 00:43:33.251 00:43:33.251 09:10:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 647c9eigvc3szchmtumng0o1vxzayz34g6ic40l40gefjowafknl5l4b3vmxlarxch2c6irucjn6tu5s0foeln9onwh2qemmqb3ppm8t4f32oadbnykp9gfb3viqoeq915zrtedoj3eryxz1sknjevhqasklcskutwbu9jq2lovuf57hpi0evg6us9guryjwmrsgo4hysdvon0dzbv59crme85gs74jd66ac6dvwzkixthr97ihlbh7s91ccdnumfpo3wimcqg1zoe4ih0hme42zb4nunmhgmuapk3u7gu1gcmrxvf33yki7g6xgj59shwng5kbxs0409dg20zmm3bjtpstw8hlra0rxlhjvudghxop3b0kt905ix6lxmtioeq9bj5b6glh3eopn0ln5wefngicqu5pnuvlh7ev6iaw6rc44tq6600640waynarrfyinnfpp98pjq2cjqcvmcxvfk3smnt58dplme3e56e4g97hn5ss0dnf3dixz0hegz34rdjso403nkzpn0ye03wrqm79wlp2qa1bv9z1gdrozldbw9s18ebi2t7u1xaf1u95yh4jzg5ueg0l37i2131q9cvc5ids6n6tbqg5mogq1pm2vpf1hy8avjc1izw9elqwjyhgjm7c60igfu3yq3ewe60rxc1ahjorui7t35wz7j1locd7jqhnlhxpm4t8d9cqlagezdjjbqspbwqmc3viiaskwa0kidxmxo314u8oqtzaekxuh3ognpml0zafq7jdmd5xinz7gv7z7d04m43nzpoeyndzmg2v1n5v8t4r2qdr695s6a2lsjudamkz6o2zyzcklei0cy6zlc7dyyh1mwqhw59dhe0bck8koy0k6sncvf34eob05v1lrvjlzf0vfkzuti3vaix2lxnxhg96hzl7sqloha1ldpzo0b5bymm2otetcylpq3qjrdhxlwcoanevythq0z7d1o6zslwjy870jxqs5zkvk6dh9srn17w3a2woo709ecbxspefwxweugnc6jqesnscpl20dss1xz5y86tvhihsa2h38jihxorb1a2pzafdomgq74myedcxlegammqj67sg6mc5h8anz52t4ff94h9g8limc45z7ir3ijjfdjj0bs9rq6sx2k24hli4p4s3rm2hgy5oxvocceedsveikchozcqzr53d1nrfh4kxckx6o61lrpqud72dg54p0sspm7a3rqeytdf8u5t0340ko051awp63tbhyvdw5gpw85joo0hv4se8e7dfn9m7ektxnpb6xs0szzftdz4ssa6b3w0jwf66t4bljpue7wl2k847nhoumttuw24pjxd3z9wiwzabwjzyomxvp6yvzcvgwn42lbh11xgrkojdx1fsv80q2ey6ln842itbnm0wzyfkn3ti7b5rz081dhfrqfj3a8772uvgnqohivczd109rqnrtmsrb1awzgelyktv6dxqc9ryamv1arb646bwjag00sbj1ab8dttshzls5gwjlg9q60heb6ed2szu90z8tjejo54jk88b29dbeps0f3tf8prl7jm0nie9quf8vee1lzxw1szx0h5v9kcrvo2xly1ssdp4s5cq09hcvwxph5ykt1gd7yup8kciwnv8igcqt2rdl63cylq8kkmfh1hrhf3pqcyy3gaamhvd2t2ytzev4vzitaat7j25d7qx2o8bnj30r9ylaf2spg9jegpz2bc9dd5e4ctgeyupx9k59hl10ulnplqv65n8zweaa0k4x5wflkigy1ipbd387sq0i2ordmodpvlhqlaxvkg0tlqdkt1qks6gnybd8caah64wuuhb6lbd9gvy8b082eipt6n3gg0f26o7btdtzaspsczvbxlow2fc39cl4mhc3vwayadhgpbx6ldpl1hjv6pezj8aoru9nv6tdzfbp2u8b8zwezpzltx0ap6dkv32u4mvw94lmk65q0xzobnlw4ijpyrc73im7ao8eraqf0qlibndkncgj8jdqy4y74wmos0bcklsr7xvwv0e0ao69ech40m5mlweqeexaf3jkxuyv8rg2pfot5c665aup5adnujrrdvft2papkpfdeag9jf9swp3p37su6iawele90dwkxiiy3ktp7eocvkmnc67b8f6cuziuag8rmgoys8aac58weqicwh28dkvogrz99460gmq0scospcy6xzb9grzf8e9xrk238ujq7feqgcxav2pds0bsnrn5c1ocuwwjytgpjd5bbydctyub2ph93gh5svjit3m0uvx5y0mxz2s2ktht54cnviq64couwrojs5afsccdxifboptb4pnf34w78aouyzi6ueya46p86v6sg6zvy16z0xqcjfub6s86lvwu7bh8xlm4ocbn851lctpl7t00cgzpjpf19n3j38j6zowewfkj1mdx746e0u1euod2okljj2wdvc6ordf9ljsnmc184quq94n1lev5eaz0bkwv6gbgwm6hg1arnwurzso4c206ynfcg5p3psw2r56bf742qf37p9vko775p80f91gfye2r2worw1qmfw86xoultgzq6skgq9o7soqdw6hf4vn31kyo1e89yg0q4x53psdnhs6rg70k12qjgr507cwqlmcdoymmq4cr9r93nvi6w47cwm5w1wboifxz8ilr37818g2yv57flhtg7d7kj3fyfv2s129sazhtswea517kmqp7tyf51z5njsm38b5tiy4esi1a
3t1ax4ol3bdmatdxa58q1lql4u3wewwmzuza2zilxnof4zo6m73lsg3hqn8ev1n7fg16fxg6aiirinn1sst9au3ftetxlzwbksj2syxbifv2m4w65zh1th9tbe6khxqqmfk3kv5z581cu3vsrl50f1ujjdhpgd9f8euer374yqgr19s9gfpbazkep3fgzonlkipzig6qps4ecgnxo4y5uvd17mluoylqchyb78qgbyc11703kbfnob80ofxp9t1gkpfb9wc0vg3m1pet4jt8sdl8xf71fvwo437n9awqo7wyl3oqyye2cv6oxor1gt5dthed0t4v35q7peeymxjiy89tcbg8m689bjl01206thhjjfssljpib76lcxvgcmc6cu5a5dmubcfp7nq0w3rd347kesy2vnqaz81dlco5ueshug9v104rp3z4mraitee7fmn4dk1g142h4jl2wgcw737ug21i5k0wseajji33nyce34ztpra37oynrsra4ajbibhzhzduw16vcijurxmyc0v3u9klrac5hqcj2wc9yqng9xp1iyx0ax6a9yldmoce007mxiiw4pa7o9i9pu92h53gak87f0gpng41er6wkqg8xg3q5amqi8h44hp2tc01fq7eqk25lo486pyiljpz6lhpxdzhb034inndxgz30axpj0umhmwvicomf3vvofxqmsuuht3i902vussgzk6efvp4i9lysjk62hj02ucal7b289kecflej4xyi53axlir85zqeouqqjohdyi914y4w2w7zd7rm62aqsqlqszw03225yub8vh79cgop2gvv5toda3vi6pd0c7obtclvzvan9ez1o59zxvgsp89m7iplimhj5trjwsn6k9xm064ahivl9al1n3zduxxwg2x451ql8lxoupcaryobo4stwxt1u8ykaisst6pdc7jscxmaajn2nfdo2b3d8z6qw4wtpjfxhh4bqvj2u7frdd3m1kcyh45a2f35c06lyng1fmextwdrab3jf60rvpxx5dl9xcr3nxrrixo3f8o74yndpq12sj9cqq9ba1ahzl5kxasibnty9po5opmt03wzw9vq2atyptt9l3r5x3xq2yx2b8g4qh577bbfg50uor9h9xfkggumag1u27u7nflhcspdz0i8q66mp1o59g3ebp0nbsa607jkcemnu10eb6kyxvudz277z4mx5wq52cxfdr3x19sj0j7ohebht4exktmvkp6xcux4hu862a6jwjnqtbdtxdsy6m6mipbwwh26f6m9r63ywms9o08pxlv54abhhncnqisod7jwalh == \6\4\7\c\9\e\i\g\v\c\3\s\z\c\h\m\t\u\m\n\g\0\o\1\v\x\z\a\y\z\3\4\g\6\i\c\4\0\l\4\0\g\e\f\j\o\w\a\f\k\n\l\5\l\4\b\3\v\m\x\l\a\r\x\c\h\2\c\6\i\r\u\c\j\n\6\t\u\5\s\0\f\o\e\l\n\9\o\n\w\h\2\q\e\m\m\q\b\3\p\p\m\8\t\4\f\3\2\o\a\d\b\n\y\k\p\9\g\f\b\3\v\i\q\o\e\q\9\1\5\z\r\t\e\d\o\j\3\e\r\y\x\z\1\s\k\n\j\e\v\h\q\a\s\k\l\c\s\k\u\t\w\b\u\9\j\q\2\l\o\v\u\f\5\7\h\p\i\0\e\v\g\6\u\s\9\g\u\r\y\j\w\m\r\s\g\o\4\h\y\s\d\v\o\n\0\d\z\b\v\5\9\c\r\m\e\8\5\g\s\7\4\j\d\6\6\a\c\6\d\v\w\z\k\i\x\t\h\r\9\7\i\h\l\b\h\7\s\9\1\c\c\d\n\u\m\f\p\o\3\w\i\m\c\q\g\1\z\o\e\4\i\h\0\h\m\e\4\2\z\b\4\n\u\n\m\h\g\m\u\a\p\k\3\u\7\g\u\1\g\c\m\r\x\v\f\3\3\y\k\i\7\g\6\x\g\j\5\9\s\h\w\n\g\5\k\b\x\s\0\4\0\9\d\g\2\0\z\m\m\3\b\j\t\p\s\t\w\8\h\l\r\a\0\r\x\l\h\j\v\u\d\g\h\x\o\p\3\b\0\k\t\9\0\5\i\x\6\l\x\m\t\i\o\e\q\9\b\j\5\b\6\g\l\h\3\e\o\p\n\0\l\n\5\w\e\f\n\g\i\c\q\u\5\p\n\u\v\l\h\7\e\v\6\i\a\w\6\r\c\4\4\t\q\6\6\0\0\6\4\0\w\a\y\n\a\r\r\f\y\i\n\n\f\p\p\9\8\p\j\q\2\c\j\q\c\v\m\c\x\v\f\k\3\s\m\n\t\5\8\d\p\l\m\e\3\e\5\6\e\4\g\9\7\h\n\5\s\s\0\d\n\f\3\d\i\x\z\0\h\e\g\z\3\4\r\d\j\s\o\4\0\3\n\k\z\p\n\0\y\e\0\3\w\r\q\m\7\9\w\l\p\2\q\a\1\b\v\9\z\1\g\d\r\o\z\l\d\b\w\9\s\1\8\e\b\i\2\t\7\u\1\x\a\f\1\u\9\5\y\h\4\j\z\g\5\u\e\g\0\l\3\7\i\2\1\3\1\q\9\c\v\c\5\i\d\s\6\n\6\t\b\q\g\5\m\o\g\q\1\p\m\2\v\p\f\1\h\y\8\a\v\j\c\1\i\z\w\9\e\l\q\w\j\y\h\g\j\m\7\c\6\0\i\g\f\u\3\y\q\3\e\w\e\6\0\r\x\c\1\a\h\j\o\r\u\i\7\t\3\5\w\z\7\j\1\l\o\c\d\7\j\q\h\n\l\h\x\p\m\4\t\8\d\9\c\q\l\a\g\e\z\d\j\j\b\q\s\p\b\w\q\m\c\3\v\i\i\a\s\k\w\a\0\k\i\d\x\m\x\o\3\1\4\u\8\o\q\t\z\a\e\k\x\u\h\3\o\g\n\p\m\l\0\z\a\f\q\7\j\d\m\d\5\x\i\n\z\7\g\v\7\z\7\d\0\4\m\4\3\n\z\p\o\e\y\n\d\z\m\g\2\v\1\n\5\v\8\t\4\r\2\q\d\r\6\9\5\s\6\a\2\l\s\j\u\d\a\m\k\z\6\o\2\z\y\z\c\k\l\e\i\0\c\y\6\z\l\c\7\d\y\y\h\1\m\w\q\h\w\5\9\d\h\e\0\b\c\k\8\k\o\y\0\k\6\s\n\c\v\f\3\4\e\o\b\0\5\v\1\l\r\v\j\l\z\f\0\v\f\k\z\u\t\i\3\v\a\i\x\2\l\x\n\x\h\g\9\6\h\z\l\7\s\q\l\o\h\a\1\l\d\p\z\o\0\b\5\b\y\m\m\2\o\t\e\t\c\y\l\p\q\3\q\j\r\d\h\x\l\w\c\o\a\n\e\v\y\t\h\q\0\z\7\d\1\o\6\z\s\l\w\j\y\8\7\0\j\x\q\s\5\z\k\v\k\6\d\h\9\s\r\n\1\7\w\3\a\2\w\o\o\7\0\9\e\c\b\x\s\p\e\f\w\x\w\e\u\g\n\c\6\j\q\e\s\n\s\c\p\l\2\0\d\s\s\1\x\z\5\y\8\6\t\v\h\i\h\s\a\2\h\3\8\j\i\h\x\o\r\b\1\a\2\p\z\a\f\d\o\m\g\q\7\4\m\y\e\d\c\x\l\e\g\a\m\m\q\j\6\7\s\g\6\m\c\5\h\8\
a\n\z\5\2\t\4\f\f\9\4\h\9\g\8\l\i\m\c\4\5\z\7\i\r\3\i\j\j\f\d\j\j\0\b\s\9\r\q\6\s\x\2\k\2\4\h\l\i\4\p\4\s\3\r\m\2\h\g\y\5\o\x\v\o\c\c\e\e\d\s\v\e\i\k\c\h\o\z\c\q\z\r\5\3\d\1\n\r\f\h\4\k\x\c\k\x\6\o\6\1\l\r\p\q\u\d\7\2\d\g\5\4\p\0\s\s\p\m\7\a\3\r\q\e\y\t\d\f\8\u\5\t\0\3\4\0\k\o\0\5\1\a\w\p\6\3\t\b\h\y\v\d\w\5\g\p\w\8\5\j\o\o\0\h\v\4\s\e\8\e\7\d\f\n\9\m\7\e\k\t\x\n\p\b\6\x\s\0\s\z\z\f\t\d\z\4\s\s\a\6\b\3\w\0\j\w\f\6\6\t\4\b\l\j\p\u\e\7\w\l\2\k\8\4\7\n\h\o\u\m\t\t\u\w\2\4\p\j\x\d\3\z\9\w\i\w\z\a\b\w\j\z\y\o\m\x\v\p\6\y\v\z\c\v\g\w\n\4\2\l\b\h\1\1\x\g\r\k\o\j\d\x\1\f\s\v\8\0\q\2\e\y\6\l\n\8\4\2\i\t\b\n\m\0\w\z\y\f\k\n\3\t\i\7\b\5\r\z\0\8\1\d\h\f\r\q\f\j\3\a\8\7\7\2\u\v\g\n\q\o\h\i\v\c\z\d\1\0\9\r\q\n\r\t\m\s\r\b\1\a\w\z\g\e\l\y\k\t\v\6\d\x\q\c\9\r\y\a\m\v\1\a\r\b\6\4\6\b\w\j\a\g\0\0\s\b\j\1\a\b\8\d\t\t\s\h\z\l\s\5\g\w\j\l\g\9\q\6\0\h\e\b\6\e\d\2\s\z\u\9\0\z\8\t\j\e\j\o\5\4\j\k\8\8\b\2\9\d\b\e\p\s\0\f\3\t\f\8\p\r\l\7\j\m\0\n\i\e\9\q\u\f\8\v\e\e\1\l\z\x\w\1\s\z\x\0\h\5\v\9\k\c\r\v\o\2\x\l\y\1\s\s\d\p\4\s\5\c\q\0\9\h\c\v\w\x\p\h\5\y\k\t\1\g\d\7\y\u\p\8\k\c\i\w\n\v\8\i\g\c\q\t\2\r\d\l\6\3\c\y\l\q\8\k\k\m\f\h\1\h\r\h\f\3\p\q\c\y\y\3\g\a\a\m\h\v\d\2\t\2\y\t\z\e\v\4\v\z\i\t\a\a\t\7\j\2\5\d\7\q\x\2\o\8\b\n\j\3\0\r\9\y\l\a\f\2\s\p\g\9\j\e\g\p\z\2\b\c\9\d\d\5\e\4\c\t\g\e\y\u\p\x\9\k\5\9\h\l\1\0\u\l\n\p\l\q\v\6\5\n\8\z\w\e\a\a\0\k\4\x\5\w\f\l\k\i\g\y\1\i\p\b\d\3\8\7\s\q\0\i\2\o\r\d\m\o\d\p\v\l\h\q\l\a\x\v\k\g\0\t\l\q\d\k\t\1\q\k\s\6\g\n\y\b\d\8\c\a\a\h\6\4\w\u\u\h\b\6\l\b\d\9\g\v\y\8\b\0\8\2\e\i\p\t\6\n\3\g\g\0\f\2\6\o\7\b\t\d\t\z\a\s\p\s\c\z\v\b\x\l\o\w\2\f\c\3\9\c\l\4\m\h\c\3\v\w\a\y\a\d\h\g\p\b\x\6\l\d\p\l\1\h\j\v\6\p\e\z\j\8\a\o\r\u\9\n\v\6\t\d\z\f\b\p\2\u\8\b\8\z\w\e\z\p\z\l\t\x\0\a\p\6\d\k\v\3\2\u\4\m\v\w\9\4\l\m\k\6\5\q\0\x\z\o\b\n\l\w\4\i\j\p\y\r\c\7\3\i\m\7\a\o\8\e\r\a\q\f\0\q\l\i\b\n\d\k\n\c\g\j\8\j\d\q\y\4\y\7\4\w\m\o\s\0\b\c\k\l\s\r\7\x\v\w\v\0\e\0\a\o\6\9\e\c\h\4\0\m\5\m\l\w\e\q\e\e\x\a\f\3\j\k\x\u\y\v\8\r\g\2\p\f\o\t\5\c\6\6\5\a\u\p\5\a\d\n\u\j\r\r\d\v\f\t\2\p\a\p\k\p\f\d\e\a\g\9\j\f\9\s\w\p\3\p\3\7\s\u\6\i\a\w\e\l\e\9\0\d\w\k\x\i\i\y\3\k\t\p\7\e\o\c\v\k\m\n\c\6\7\b\8\f\6\c\u\z\i\u\a\g\8\r\m\g\o\y\s\8\a\a\c\5\8\w\e\q\i\c\w\h\2\8\d\k\v\o\g\r\z\9\9\4\6\0\g\m\q\0\s\c\o\s\p\c\y\6\x\z\b\9\g\r\z\f\8\e\9\x\r\k\2\3\8\u\j\q\7\f\e\q\g\c\x\a\v\2\p\d\s\0\b\s\n\r\n\5\c\1\o\c\u\w\w\j\y\t\g\p\j\d\5\b\b\y\d\c\t\y\u\b\2\p\h\9\3\g\h\5\s\v\j\i\t\3\m\0\u\v\x\5\y\0\m\x\z\2\s\2\k\t\h\t\5\4\c\n\v\i\q\6\4\c\o\u\w\r\o\j\s\5\a\f\s\c\c\d\x\i\f\b\o\p\t\b\4\p\n\f\3\4\w\7\8\a\o\u\y\z\i\6\u\e\y\a\4\6\p\8\6\v\6\s\g\6\z\v\y\1\6\z\0\x\q\c\j\f\u\b\6\s\8\6\l\v\w\u\7\b\h\8\x\l\m\4\o\c\b\n\8\5\1\l\c\t\p\l\7\t\0\0\c\g\z\p\j\p\f\1\9\n\3\j\3\8\j\6\z\o\w\e\w\f\k\j\1\m\d\x\7\4\6\e\0\u\1\e\u\o\d\2\o\k\l\j\j\2\w\d\v\c\6\o\r\d\f\9\l\j\s\n\m\c\1\8\4\q\u\q\9\4\n\1\l\e\v\5\e\a\z\0\b\k\w\v\6\g\b\g\w\m\6\h\g\1\a\r\n\w\u\r\z\s\o\4\c\2\0\6\y\n\f\c\g\5\p\3\p\s\w\2\r\5\6\b\f\7\4\2\q\f\3\7\p\9\v\k\o\7\7\5\p\8\0\f\9\1\g\f\y\e\2\r\2\w\o\r\w\1\q\m\f\w\8\6\x\o\u\l\t\g\z\q\6\s\k\g\q\9\o\7\s\o\q\d\w\6\h\f\4\v\n\3\1\k\y\o\1\e\8\9\y\g\0\q\4\x\5\3\p\s\d\n\h\s\6\r\g\7\0\k\1\2\q\j\g\r\5\0\7\c\w\q\l\m\c\d\o\y\m\m\q\4\c\r\9\r\9\3\n\v\i\6\w\4\7\c\w\m\5\w\1\w\b\o\i\f\x\z\8\i\l\r\3\7\8\1\8\g\2\y\v\5\7\f\l\h\t\g\7\d\7\k\j\3\f\y\f\v\2\s\1\2\9\s\a\z\h\t\s\w\e\a\5\1\7\k\m\q\p\7\t\y\f\5\1\z\5\n\j\s\m\3\8\b\5\t\i\y\4\e\s\i\1\a\3\t\1\a\x\4\o\l\3\b\d\m\a\t\d\x\a\5\8\q\1\l\q\l\4\u\3\w\e\w\w\m\z\u\z\a\2\z\i\l\x\n\o\f\4\z\o\6\m\7\3\l\s\g\3\h\q\n\8\e\v\1\n\7\f\g\1\6\f\x\g\6\a\i\i\r\i\n\n\1\s\s\t\9\a\u\3\f\t\e\t\x\l\z\w\b\k\s\j\2\s\y\x\b\i\f\v\2\m\4\w\6\5\z\h\1\t\h\9\t\b\e\6
\k\h\x\q\q\m\f\k\3\k\v\5\z\5\8\1\c\u\3\v\s\r\l\5\0\f\1\u\j\j\d\h\p\g\d\9\f\8\e\u\e\r\3\7\4\y\q\g\r\1\9\s\9\g\f\p\b\a\z\k\e\p\3\f\g\z\o\n\l\k\i\p\z\i\g\6\q\p\s\4\e\c\g\n\x\o\4\y\5\u\v\d\1\7\m\l\u\o\y\l\q\c\h\y\b\7\8\q\g\b\y\c\1\1\7\0\3\k\b\f\n\o\b\8\0\o\f\x\p\9\t\1\g\k\p\f\b\9\w\c\0\v\g\3\m\1\p\e\t\4\j\t\8\s\d\l\8\x\f\7\1\f\v\w\o\4\3\7\n\9\a\w\q\o\7\w\y\l\3\o\q\y\y\e\2\c\v\6\o\x\o\r\1\g\t\5\d\t\h\e\d\0\t\4\v\3\5\q\7\p\e\e\y\m\x\j\i\y\8\9\t\c\b\g\8\m\6\8\9\b\j\l\0\1\2\0\6\t\h\h\j\j\f\s\s\l\j\p\i\b\7\6\l\c\x\v\g\c\m\c\6\c\u\5\a\5\d\m\u\b\c\f\p\7\n\q\0\w\3\r\d\3\4\7\k\e\s\y\2\v\n\q\a\z\8\1\d\l\c\o\5\u\e\s\h\u\g\9\v\1\0\4\r\p\3\z\4\m\r\a\i\t\e\e\7\f\m\n\4\d\k\1\g\1\4\2\h\4\j\l\2\w\g\c\w\7\3\7\u\g\2\1\i\5\k\0\w\s\e\a\j\j\i\3\3\n\y\c\e\3\4\z\t\p\r\a\3\7\o\y\n\r\s\r\a\4\a\j\b\i\b\h\z\h\z\d\u\w\1\6\v\c\i\j\u\r\x\m\y\c\0\v\3\u\9\k\l\r\a\c\5\h\q\c\j\2\w\c\9\y\q\n\g\9\x\p\1\i\y\x\0\a\x\6\a\9\y\l\d\m\o\c\e\0\0\7\m\x\i\i\w\4\p\a\7\o\9\i\9\p\u\9\2\h\5\3\g\a\k\8\7\f\0\g\p\n\g\4\1\e\r\6\w\k\q\g\8\x\g\3\q\5\a\m\q\i\8\h\4\4\h\p\2\t\c\0\1\f\q\7\e\q\k\2\5\l\o\4\8\6\p\y\i\l\j\p\z\6\l\h\p\x\d\z\h\b\0\3\4\i\n\n\d\x\g\z\3\0\a\x\p\j\0\u\m\h\m\w\v\i\c\o\m\f\3\v\v\o\f\x\q\m\s\u\u\h\t\3\i\9\0\2\v\u\s\s\g\z\k\6\e\f\v\p\4\i\9\l\y\s\j\k\6\2\h\j\0\2\u\c\a\l\7\b\2\8\9\k\e\c\f\l\e\j\4\x\y\i\5\3\a\x\l\i\r\8\5\z\q\e\o\u\q\q\j\o\h\d\y\i\9\1\4\y\4\w\2\w\7\z\d\7\r\m\6\2\a\q\s\q\l\q\s\z\w\0\3\2\2\5\y\u\b\8\v\h\7\9\c\g\o\p\2\g\v\v\5\t\o\d\a\3\v\i\6\p\d\0\c\7\o\b\t\c\l\v\z\v\a\n\9\e\z\1\o\5\9\z\x\v\g\s\p\8\9\m\7\i\p\l\i\m\h\j\5\t\r\j\w\s\n\6\k\9\x\m\0\6\4\a\h\i\v\l\9\a\l\1\n\3\z\d\u\x\x\w\g\2\x\4\5\1\q\l\8\l\x\o\u\p\c\a\r\y\o\b\o\4\s\t\w\x\t\1\u\8\y\k\a\i\s\s\t\6\p\d\c\7\j\s\c\x\m\a\a\j\n\2\n\f\d\o\2\b\3\d\8\z\6\q\w\4\w\t\p\j\f\x\h\h\4\b\q\v\j\2\u\7\f\r\d\d\3\m\1\k\c\y\h\4\5\a\2\f\3\5\c\0\6\l\y\n\g\1\f\m\e\x\t\w\d\r\a\b\3\j\f\6\0\r\v\p\x\x\5\d\l\9\x\c\r\3\n\x\r\r\i\x\o\3\f\8\o\7\4\y\n\d\p\q\1\2\s\j\9\c\q\q\9\b\a\1\a\h\z\l\5\k\x\a\s\i\b\n\t\y\9\p\o\5\o\p\m\t\0\3\w\z\w\9\v\q\2\a\t\y\p\t\t\9\l\3\r\5\x\3\x\q\2\y\x\2\b\8\g\4\q\h\5\7\7\b\b\f\g\5\0\u\o\r\9\h\9\x\f\k\g\g\u\m\a\g\1\u\2\7\u\7\n\f\l\h\c\s\p\d\z\0\i\8\q\6\6\m\p\1\o\5\9\g\3\e\b\p\0\n\b\s\a\6\0\7\j\k\c\e\m\n\u\1\0\e\b\6\k\y\x\v\u\d\z\2\7\7\z\4\m\x\5\w\q\5\2\c\x\f\d\r\3\x\1\9\s\j\0\j\7\o\h\e\b\h\t\4\e\x\k\t\m\v\k\p\6\x\c\u\x\4\h\u\8\6\2\a\6\j\w\j\n\q\t\b\d\t\x\d\s\y\6\m\6\m\i\p\b\w\w\h\2\6\f\6\m\9\r\6\3\y\w\m\s\9\o\0\8\p\x\l\v\5\4\a\b\h\h\n\c\n\q\i\s\o\d\7\j\w\a\l\h ]] 00:43:33.252 00:43:33.252 real 0m4.104s 00:43:33.252 user 0m3.436s 00:43:33.252 sys 0m0.520s 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:43:33.252 ************************************ 00:43:33.252 END TEST dd_rw_offset 00:43:33.252 ************************************ 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 
-- # local count=1 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:43:33.252 09:10:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:33.252 { 00:43:33.252 "subsystems": [ 00:43:33.252 { 00:43:33.252 "subsystem": "bdev", 00:43:33.252 "config": [ 00:43:33.252 { 00:43:33.252 "params": { 00:43:33.252 "trtype": "pcie", 00:43:33.252 "traddr": "0000:00:10.0", 00:43:33.252 "name": "Nvme0" 00:43:33.252 }, 00:43:33.252 "method": "bdev_nvme_attach_controller" 00:43:33.252 }, 00:43:33.252 { 00:43:33.252 "method": "bdev_wait_for_examine" 00:43:33.252 } 00:43:33.252 ] 00:43:33.252 } 00:43:33.252 ] 00:43:33.252 } 00:43:33.252 [2024-07-12 09:10:08.381011] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:33.252 [2024-07-12 09:10:08.381410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169517 ] 00:43:33.510 [2024-07-12 09:10:08.544395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:33.766 [2024-07-12 09:10:08.769431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:35.397  Copying: 1024/1024 [kB] (average 1000 MBps) 00:43:35.397 00:43:35.397 09:10:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:35.397 ************************************ 00:43:35.397 END TEST spdk_dd_basic_rw 00:43:35.397 ************************************ 00:43:35.397 00:43:35.397 real 0m48.690s 00:43:35.397 user 0m40.976s 00:43:35.397 sys 0m5.917s 00:43:35.397 09:10:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:35.397 09:10:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:43:35.397 09:10:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:43:35.397 09:10:10 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:43:35.397 09:10:10 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:35.397 09:10:10 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:35.397 09:10:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:43:35.397 ************************************ 00:43:35.397 START TEST spdk_dd_posix 00:43:35.397 ************************************ 00:43:35.397 09:10:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:43:35.397 * Looking for test storage... 
00:43:35.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:43:35.397 09:10:10 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:35.397 09:10:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:35.397 09:10:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:35.397 09:10:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:43:35.398 * First test run, using AIO 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:43:35.398 ************************************ 00:43:35.398 START TEST dd_flag_append 00:43:35.398 ************************************ 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=5egnlabx4sof6duzmpvkeb0bi9p7s0cr 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cmkvbm02qh0gri0rqtzfsrlsg0szwa46 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 5egnlabx4sof6duzmpvkeb0bi9p7s0cr 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cmkvbm02qh0gri0rqtzfsrlsg0szwa46 00:43:35.398 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:43:35.398 [2024-07-12 09:10:10.560158] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:35.398 [2024-07-12 09:10:10.560644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169615 ] 00:43:35.656 [2024-07-12 09:10:10.732428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:35.914 [2024-07-12 09:10:11.025877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.549  Copying: 32/32 [B] (average 31 kBps) 00:43:37.549 00:43:37.549 ************************************ 00:43:37.549 END TEST dd_flag_append 00:43:37.549 ************************************ 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cmkvbm02qh0gri0rqtzfsrlsg0szwa465egnlabx4sof6duzmpvkeb0bi9p7s0cr == \c\m\k\v\b\m\0\2\q\h\0\g\r\i\0\r\q\t\z\f\s\r\l\s\g\0\s\z\w\a\4\6\5\e\g\n\l\a\b\x\4\s\o\f\6\d\u\z\m\p\v\k\e\b\0\b\i\9\p\7\s\0\c\r ]] 00:43:37.549 00:43:37.549 real 0m2.033s 00:43:37.549 user 0m1.625s 00:43:37.549 sys 0m0.276s 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:43:37.549 ************************************ 00:43:37.549 START TEST dd_flag_directory 00:43:37.549 ************************************ 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:37.549 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:37.549 [2024-07-12 09:10:12.634365] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:37.549 [2024-07-12 09:10:12.634799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169663 ] 00:43:37.808 [2024-07-12 09:10:12.794088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:38.066 [2024-07-12 09:10:13.074971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:38.323 [2024-07-12 09:10:13.382188] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:43:38.323 [2024-07-12 09:10:13.382445] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:43:38.324 [2024-07-12 09:10:13.382586] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:39.258 [2024-07-12 09:10:14.115195] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:39.516 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:43:39.516 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:39.516 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:39.517 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:43:39.517 [2024-07-12 09:10:14.603492] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:39.517 [2024-07-12 09:10:14.604102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169691 ] 00:43:39.775 [2024-07-12 09:10:14.788255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:40.033 [2024-07-12 09:10:15.044560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:40.335 [2024-07-12 09:10:15.351356] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:43:40.335 [2024-07-12 09:10:15.351729] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:43:40.335 [2024-07-12 09:10:15.351897] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:40.902 [2024-07-12 09:10:16.075876] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:41.479 ************************************ 00:43:41.479 END TEST dd_flag_directory 00:43:41.479 ************************************ 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:41.479 00:43:41.479 real 0m3.922s 00:43:41.479 user 0m3.202s 00:43:41.479 sys 0m0.508s 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:43:41.479 ************************************ 00:43:41.479 START TEST dd_flag_nofollow 00:43:41.479 ************************************ 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:43:41.479 09:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:41.479 [2024-07-12 09:10:16.601693] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:41.479 [2024-07-12 09:10:16.602139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169742 ] 00:43:41.737 [2024-07-12 09:10:16.768087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:41.996 [2024-07-12 09:10:16.988006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:42.255 [2024-07-12 09:10:17.306225] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:43:42.255 [2024-07-12 09:10:17.306557] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:43:42.255 [2024-07-12 09:10:17.306711] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:43.190 [2024-07-12 09:10:18.040225] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:43:43.449 09:10:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:43:43.449 [2024-07-12 09:10:18.521834] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:43.449 [2024-07-12 09:10:18.522345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169768 ] 00:43:43.708 [2024-07-12 09:10:18.692884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:43.966 [2024-07-12 09:10:18.911972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:44.274 [2024-07-12 09:10:19.225421] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:43:44.274 [2024-07-12 09:10:19.225828] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:43:44.274 [2024-07-12 09:10:19.225977] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:44.854 [2024-07-12 09:10:19.953750] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:43:45.420 09:10:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:45.420 [2024-07-12 09:10:20.441047] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:45.420 [2024-07-12 09:10:20.441527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169810 ] 00:43:45.420 [2024-07-12 09:10:20.610665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:45.679 [2024-07-12 09:10:20.848408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:47.175  Copying: 512/512 [B] (average 500 kBps) 00:43:47.175 00:43:47.175 ************************************ 00:43:47.175 END TEST dd_flag_nofollow 00:43:47.175 ************************************ 00:43:47.175 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ w18qcst0hd6ayb3h52qqibr4zoo9zwhlpyiz4msrtseu8fxjnduyqp3g61dsiml03w2u05c21tveaj55qs1nodok70f0c5j0nabr4x8vqxqjp0gulq94pxbeq9ks92h80bpry9gx9kmduofr8x875m9dca5vah6ifbjcjut8qh7sym2vpjo3l9gl36sq9k79kritxn3j2vsz8gs10i79fdj5d3p06ddugsriuup8nojjqkwlcwyn4v81q5p03uxc68zndvd3o4v3xp3dbnmk24hodmy71pjg8eei5xnj4lcr3n0ioh3prt3ntdr6gcwb0t4rlnpoedhe837g9u6lmee0m10uz15p515nzukf913glax54qcrdgc9ahhvviy5sr5172m6a6xxbiuos56jnaz7ndgwmipb84vb1jo54f19hh3phnc54yecxkk7jsp3ypbw5uo8l63l84vm2heor4a1gai8xrbmlqdzur67tlk891xr9vz5n3icnep37mzq == \w\1\8\q\c\s\t\0\h\d\6\a\y\b\3\h\5\2\q\q\i\b\r\4\z\o\o\9\z\w\h\l\p\y\i\z\4\m\s\r\t\s\e\u\8\f\x\j\n\d\u\y\q\p\3\g\6\1\d\s\i\m\l\0\3\w\2\u\0\5\c\2\1\t\v\e\a\j\5\5\q\s\1\n\o\d\o\k\7\0\f\0\c\5\j\0\n\a\b\r\4\x\8\v\q\x\q\j\p\0\g\u\l\q\9\4\p\x\b\e\q\9\k\s\9\2\h\8\0\b\p\r\y\9\g\x\9\k\m\d\u\o\f\r\8\x\8\7\5\m\9\d\c\a\5\v\a\h\6\i\f\b\j\c\j\u\t\8\q\h\7\s\y\m\2\v\p\j\o\3\l\9\g\l\3\6\s\q\9\k\7\9\k\r\i\t\x\n\3\j\2\v\s\z\8\g\s\1\0\i\7\9\f\d\j\5\d\3\p\0\6\d\d\u\g\s\r\i\u\u\p\8\n\o\j\j\q\k\w\l\c\w\y\n\4\v\8\1\q\5\p\0\3\u\x\c\6\8\z\n\d\v\d\3\o\4\v\3\x\p\3\d\b\n\m\k\2\4\h\o\d\m\y\7\1\p\j\g\8\e\e\i\5\x\n\j\4\l\c\r\3\n\0\i\o\h\3\p\r\t\3\n\t\d\r\6\g\c\w\b\0\t\4\r\l\n\p\o\e\d\h\e\8\3\7\g\9\u\6\l\m\e\e\0\m\1\0\u\z\1\5\p\5\1\5\n\z\u\k\f\9\1\3\g\l\a\x\5\4\q\c\r\d\g\c\9\a\h\h\v\v\i\y\5\s\r\5\1\7\2\m\6\a\6\x\x\b\i\u\o\s\5\6\j\n\a\z\7\n\d\g\w\m\i\p\b\8\4\v\b\1\j\o\5\4\f\1\9\h\h\3\p\h\n\c\5\4\y\e\c\x\k\k\7\j\s\p\3\y\p\b\w\5\u\o\8\l\6\3\l\8\4\v\m\2\h\e\o\r\4\a\1\g\a\i\8\x\r\b\m\l\q\d\z\u\r\6\7\t\l\k\8\9\1\x\r\9\v\z\5\n\3\i\c\n\e\p\3\7\m\z\q ]] 00:43:47.175 00:43:47.175 real 0m5.814s 00:43:47.175 user 0m4.791s 00:43:47.175 sys 0m0.678s 00:43:47.175 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:47.175 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:43:47.433 ************************************ 00:43:47.433 START TEST dd_flag_noatime 00:43:47.433 ************************************ 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime 
-- dd/posix.sh@54 -- # local atime_of 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720775421 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720775422 00:43:47.433 09:10:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:43:48.367 09:10:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:48.367 [2024-07-12 09:10:23.487786] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:43:48.367 [2024-07-12 09:10:23.488326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169874 ] 00:43:48.624 [2024-07-12 09:10:23.657094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:48.882 [2024-07-12 09:10:23.918481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.514  Copying: 512/512 [B] (average 500 kBps) 00:43:50.514 00:43:50.514 09:10:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:50.514 09:10:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720775421 )) 00:43:50.514 09:10:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:50.514 09:10:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720775422 )) 00:43:50.514 09:10:25 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:43:50.514 [2024-07-12 09:10:25.505258] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:50.514 [2024-07-12 09:10:25.506578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169901 ] 00:43:50.514 [2024-07-12 09:10:25.680906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:50.772 [2024-07-12 09:10:25.897040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.400  Copying: 512/512 [B] (average 500 kBps) 00:43:52.400 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:43:52.400 ************************************ 00:43:52.400 END TEST dd_flag_noatime 00:43:52.400 ************************************ 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720775426 )) 00:43:52.400 00:43:52.400 real 0m4.973s 00:43:52.400 user 0m3.152s 00:43:52.400 sys 0m0.520s 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:43:52.400 ************************************ 00:43:52.400 START TEST dd_flags_misc 00:43:52.400 ************************************ 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:43:52.400 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:43:52.400 [2024-07-12 09:10:27.497514] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:52.400 [2024-07-12 09:10:27.497929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169944 ] 00:43:52.658 [2024-07-12 09:10:27.661287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.917 [2024-07-12 09:10:27.965375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:54.552  Copying: 512/512 [B] (average 500 kBps) 00:43:54.552 00:43:54.552 09:10:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xbka0szf08zhwpgw46cwluxtpvwpv1lo6jijmbnll63gc3z471iu8t47ysdn2pb0tq5onqdca6802ujap5q989qrjwdmfpngikyygypwqw1grgbcygbibf5dn1nyvdn6nhxauqlm1srkyfhp0vzbtmphvwefehlvpv7jciyu7qnqnqhogirguffc4gvvkvu3b7pqnjo1ccew8oy2bh1htzjiuy6w61xbcwkjjqwrl5g8e1f0bx1k9zk6ozae56w6h26w3kb6ovqn38p9yo5g1npg6hwrcmdbdi48dk0fppjf070icvei59k00ifq7rkm9orpw4xyyfvm5gn51801tqj9hrys3yz3mxnc2faxznais8fhqrjxfvhweffvrrz32nr7yoee4d2qbyiudm8knocnvbiiu0pu2axkcr421j3a298saw3280yzpd8tx8nrvb5p2b2khvjw6p8efw4agu6qpe32w131l1cp9wi7hyr8yawh8t7mt6y52rdkntdf == \x\b\k\a\0\s\z\f\0\8\z\h\w\p\g\w\4\6\c\w\l\u\x\t\p\v\w\p\v\1\l\o\6\j\i\j\m\b\n\l\l\6\3\g\c\3\z\4\7\1\i\u\8\t\4\7\y\s\d\n\2\p\b\0\t\q\5\o\n\q\d\c\a\6\8\0\2\u\j\a\p\5\q\9\8\9\q\r\j\w\d\m\f\p\n\g\i\k\y\y\g\y\p\w\q\w\1\g\r\g\b\c\y\g\b\i\b\f\5\d\n\1\n\y\v\d\n\6\n\h\x\a\u\q\l\m\1\s\r\k\y\f\h\p\0\v\z\b\t\m\p\h\v\w\e\f\e\h\l\v\p\v\7\j\c\i\y\u\7\q\n\q\n\q\h\o\g\i\r\g\u\f\f\c\4\g\v\v\k\v\u\3\b\7\p\q\n\j\o\1\c\c\e\w\8\o\y\2\b\h\1\h\t\z\j\i\u\y\6\w\6\1\x\b\c\w\k\j\j\q\w\r\l\5\g\8\e\1\f\0\b\x\1\k\9\z\k\6\o\z\a\e\5\6\w\6\h\2\6\w\3\k\b\6\o\v\q\n\3\8\p\9\y\o\5\g\1\n\p\g\6\h\w\r\c\m\d\b\d\i\4\8\d\k\0\f\p\p\j\f\0\7\0\i\c\v\e\i\5\9\k\0\0\i\f\q\7\r\k\m\9\o\r\p\w\4\x\y\y\f\v\m\5\g\n\5\1\8\0\1\t\q\j\9\h\r\y\s\3\y\z\3\m\x\n\c\2\f\a\x\z\n\a\i\s\8\f\h\q\r\j\x\f\v\h\w\e\f\f\v\r\r\z\3\2\n\r\7\y\o\e\e\4\d\2\q\b\y\i\u\d\m\8\k\n\o\c\n\v\b\i\i\u\0\p\u\2\a\x\k\c\r\4\2\1\j\3\a\2\9\8\s\a\w\3\2\8\0\y\z\p\d\8\t\x\8\n\r\v\b\5\p\2\b\2\k\h\v\j\w\6\p\8\e\f\w\4\a\g\u\6\q\p\e\3\2\w\1\3\1\l\1\c\p\9\w\i\7\h\y\r\8\y\a\w\h\8\t\7\m\t\6\y\5\2\r\d\k\n\t\d\f ]] 00:43:54.552 09:10:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:43:54.552 09:10:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:43:54.552 [2024-07-12 09:10:29.549117] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:54.552 [2024-07-12 09:10:29.549999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169973 ] 00:43:54.552 [2024-07-12 09:10:29.720293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.810 [2024-07-12 09:10:29.972070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:56.311  Copying: 512/512 [B] (average 500 kBps) 00:43:56.311 00:43:56.311 09:10:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xbka0szf08zhwpgw46cwluxtpvwpv1lo6jijmbnll63gc3z471iu8t47ysdn2pb0tq5onqdca6802ujap5q989qrjwdmfpngikyygypwqw1grgbcygbibf5dn1nyvdn6nhxauqlm1srkyfhp0vzbtmphvwefehlvpv7jciyu7qnqnqhogirguffc4gvvkvu3b7pqnjo1ccew8oy2bh1htzjiuy6w61xbcwkjjqwrl5g8e1f0bx1k9zk6ozae56w6h26w3kb6ovqn38p9yo5g1npg6hwrcmdbdi48dk0fppjf070icvei59k00ifq7rkm9orpw4xyyfvm5gn51801tqj9hrys3yz3mxnc2faxznais8fhqrjxfvhweffvrrz32nr7yoee4d2qbyiudm8knocnvbiiu0pu2axkcr421j3a298saw3280yzpd8tx8nrvb5p2b2khvjw6p8efw4agu6qpe32w131l1cp9wi7hyr8yawh8t7mt6y52rdkntdf == \x\b\k\a\0\s\z\f\0\8\z\h\w\p\g\w\4\6\c\w\l\u\x\t\p\v\w\p\v\1\l\o\6\j\i\j\m\b\n\l\l\6\3\g\c\3\z\4\7\1\i\u\8\t\4\7\y\s\d\n\2\p\b\0\t\q\5\o\n\q\d\c\a\6\8\0\2\u\j\a\p\5\q\9\8\9\q\r\j\w\d\m\f\p\n\g\i\k\y\y\g\y\p\w\q\w\1\g\r\g\b\c\y\g\b\i\b\f\5\d\n\1\n\y\v\d\n\6\n\h\x\a\u\q\l\m\1\s\r\k\y\f\h\p\0\v\z\b\t\m\p\h\v\w\e\f\e\h\l\v\p\v\7\j\c\i\y\u\7\q\n\q\n\q\h\o\g\i\r\g\u\f\f\c\4\g\v\v\k\v\u\3\b\7\p\q\n\j\o\1\c\c\e\w\8\o\y\2\b\h\1\h\t\z\j\i\u\y\6\w\6\1\x\b\c\w\k\j\j\q\w\r\l\5\g\8\e\1\f\0\b\x\1\k\9\z\k\6\o\z\a\e\5\6\w\6\h\2\6\w\3\k\b\6\o\v\q\n\3\8\p\9\y\o\5\g\1\n\p\g\6\h\w\r\c\m\d\b\d\i\4\8\d\k\0\f\p\p\j\f\0\7\0\i\c\v\e\i\5\9\k\0\0\i\f\q\7\r\k\m\9\o\r\p\w\4\x\y\y\f\v\m\5\g\n\5\1\8\0\1\t\q\j\9\h\r\y\s\3\y\z\3\m\x\n\c\2\f\a\x\z\n\a\i\s\8\f\h\q\r\j\x\f\v\h\w\e\f\f\v\r\r\z\3\2\n\r\7\y\o\e\e\4\d\2\q\b\y\i\u\d\m\8\k\n\o\c\n\v\b\i\i\u\0\p\u\2\a\x\k\c\r\4\2\1\j\3\a\2\9\8\s\a\w\3\2\8\0\y\z\p\d\8\t\x\8\n\r\v\b\5\p\2\b\2\k\h\v\j\w\6\p\8\e\f\w\4\a\g\u\6\q\p\e\3\2\w\1\3\1\l\1\c\p\9\w\i\7\h\y\r\8\y\a\w\h\8\t\7\m\t\6\y\5\2\r\d\k\n\t\d\f ]] 00:43:56.311 09:10:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:43:56.311 09:10:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:43:56.569 [2024-07-12 09:10:31.509313] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:56.569 [2024-07-12 09:10:31.509806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170017 ] 00:43:56.569 [2024-07-12 09:10:31.675364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:56.826 [2024-07-12 09:10:31.889827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.458  Copying: 512/512 [B] (average 250 kBps) 00:43:58.458 00:43:58.458 09:10:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xbka0szf08zhwpgw46cwluxtpvwpv1lo6jijmbnll63gc3z471iu8t47ysdn2pb0tq5onqdca6802ujap5q989qrjwdmfpngikyygypwqw1grgbcygbibf5dn1nyvdn6nhxauqlm1srkyfhp0vzbtmphvwefehlvpv7jciyu7qnqnqhogirguffc4gvvkvu3b7pqnjo1ccew8oy2bh1htzjiuy6w61xbcwkjjqwrl5g8e1f0bx1k9zk6ozae56w6h26w3kb6ovqn38p9yo5g1npg6hwrcmdbdi48dk0fppjf070icvei59k00ifq7rkm9orpw4xyyfvm5gn51801tqj9hrys3yz3mxnc2faxznais8fhqrjxfvhweffvrrz32nr7yoee4d2qbyiudm8knocnvbiiu0pu2axkcr421j3a298saw3280yzpd8tx8nrvb5p2b2khvjw6p8efw4agu6qpe32w131l1cp9wi7hyr8yawh8t7mt6y52rdkntdf == \x\b\k\a\0\s\z\f\0\8\z\h\w\p\g\w\4\6\c\w\l\u\x\t\p\v\w\p\v\1\l\o\6\j\i\j\m\b\n\l\l\6\3\g\c\3\z\4\7\1\i\u\8\t\4\7\y\s\d\n\2\p\b\0\t\q\5\o\n\q\d\c\a\6\8\0\2\u\j\a\p\5\q\9\8\9\q\r\j\w\d\m\f\p\n\g\i\k\y\y\g\y\p\w\q\w\1\g\r\g\b\c\y\g\b\i\b\f\5\d\n\1\n\y\v\d\n\6\n\h\x\a\u\q\l\m\1\s\r\k\y\f\h\p\0\v\z\b\t\m\p\h\v\w\e\f\e\h\l\v\p\v\7\j\c\i\y\u\7\q\n\q\n\q\h\o\g\i\r\g\u\f\f\c\4\g\v\v\k\v\u\3\b\7\p\q\n\j\o\1\c\c\e\w\8\o\y\2\b\h\1\h\t\z\j\i\u\y\6\w\6\1\x\b\c\w\k\j\j\q\w\r\l\5\g\8\e\1\f\0\b\x\1\k\9\z\k\6\o\z\a\e\5\6\w\6\h\2\6\w\3\k\b\6\o\v\q\n\3\8\p\9\y\o\5\g\1\n\p\g\6\h\w\r\c\m\d\b\d\i\4\8\d\k\0\f\p\p\j\f\0\7\0\i\c\v\e\i\5\9\k\0\0\i\f\q\7\r\k\m\9\o\r\p\w\4\x\y\y\f\v\m\5\g\n\5\1\8\0\1\t\q\j\9\h\r\y\s\3\y\z\3\m\x\n\c\2\f\a\x\z\n\a\i\s\8\f\h\q\r\j\x\f\v\h\w\e\f\f\v\r\r\z\3\2\n\r\7\y\o\e\e\4\d\2\q\b\y\i\u\d\m\8\k\n\o\c\n\v\b\i\i\u\0\p\u\2\a\x\k\c\r\4\2\1\j\3\a\2\9\8\s\a\w\3\2\8\0\y\z\p\d\8\t\x\8\n\r\v\b\5\p\2\b\2\k\h\v\j\w\6\p\8\e\f\w\4\a\g\u\6\q\p\e\3\2\w\1\3\1\l\1\c\p\9\w\i\7\h\y\r\8\y\a\w\h\8\t\7\m\t\6\y\5\2\r\d\k\n\t\d\f ]] 00:43:58.458 09:10:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:43:58.458 09:10:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:43:58.458 [2024-07-12 09:10:33.427951] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:43:58.458 [2024-07-12 09:10:33.428409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170042 ] 00:43:58.458 [2024-07-12 09:10:33.607490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:58.716 [2024-07-12 09:10:33.831955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.393  Copying: 512/512 [B] (average 250 kBps) 00:44:00.393 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xbka0szf08zhwpgw46cwluxtpvwpv1lo6jijmbnll63gc3z471iu8t47ysdn2pb0tq5onqdca6802ujap5q989qrjwdmfpngikyygypwqw1grgbcygbibf5dn1nyvdn6nhxauqlm1srkyfhp0vzbtmphvwefehlvpv7jciyu7qnqnqhogirguffc4gvvkvu3b7pqnjo1ccew8oy2bh1htzjiuy6w61xbcwkjjqwrl5g8e1f0bx1k9zk6ozae56w6h26w3kb6ovqn38p9yo5g1npg6hwrcmdbdi48dk0fppjf070icvei59k00ifq7rkm9orpw4xyyfvm5gn51801tqj9hrys3yz3mxnc2faxznais8fhqrjxfvhweffvrrz32nr7yoee4d2qbyiudm8knocnvbiiu0pu2axkcr421j3a298saw3280yzpd8tx8nrvb5p2b2khvjw6p8efw4agu6qpe32w131l1cp9wi7hyr8yawh8t7mt6y52rdkntdf == \x\b\k\a\0\s\z\f\0\8\z\h\w\p\g\w\4\6\c\w\l\u\x\t\p\v\w\p\v\1\l\o\6\j\i\j\m\b\n\l\l\6\3\g\c\3\z\4\7\1\i\u\8\t\4\7\y\s\d\n\2\p\b\0\t\q\5\o\n\q\d\c\a\6\8\0\2\u\j\a\p\5\q\9\8\9\q\r\j\w\d\m\f\p\n\g\i\k\y\y\g\y\p\w\q\w\1\g\r\g\b\c\y\g\b\i\b\f\5\d\n\1\n\y\v\d\n\6\n\h\x\a\u\q\l\m\1\s\r\k\y\f\h\p\0\v\z\b\t\m\p\h\v\w\e\f\e\h\l\v\p\v\7\j\c\i\y\u\7\q\n\q\n\q\h\o\g\i\r\g\u\f\f\c\4\g\v\v\k\v\u\3\b\7\p\q\n\j\o\1\c\c\e\w\8\o\y\2\b\h\1\h\t\z\j\i\u\y\6\w\6\1\x\b\c\w\k\j\j\q\w\r\l\5\g\8\e\1\f\0\b\x\1\k\9\z\k\6\o\z\a\e\5\6\w\6\h\2\6\w\3\k\b\6\o\v\q\n\3\8\p\9\y\o\5\g\1\n\p\g\6\h\w\r\c\m\d\b\d\i\4\8\d\k\0\f\p\p\j\f\0\7\0\i\c\v\e\i\5\9\k\0\0\i\f\q\7\r\k\m\9\o\r\p\w\4\x\y\y\f\v\m\5\g\n\5\1\8\0\1\t\q\j\9\h\r\y\s\3\y\z\3\m\x\n\c\2\f\a\x\z\n\a\i\s\8\f\h\q\r\j\x\f\v\h\w\e\f\f\v\r\r\z\3\2\n\r\7\y\o\e\e\4\d\2\q\b\y\i\u\d\m\8\k\n\o\c\n\v\b\i\i\u\0\p\u\2\a\x\k\c\r\4\2\1\j\3\a\2\9\8\s\a\w\3\2\8\0\y\z\p\d\8\t\x\8\n\r\v\b\5\p\2\b\2\k\h\v\j\w\6\p\8\e\f\w\4\a\g\u\6\q\p\e\3\2\w\1\3\1\l\1\c\p\9\w\i\7\h\y\r\8\y\a\w\h\8\t\7\m\t\6\y\5\2\r\d\k\n\t\d\f ]] 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:00.393 09:10:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:44:00.393 [2024-07-12 09:10:35.372090] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:00.393 [2024-07-12 09:10:35.372706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170066 ] 00:44:00.393 [2024-07-12 09:10:35.537880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:00.652 [2024-07-12 09:10:35.763002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:02.285  Copying: 512/512 [B] (average 500 kBps) 00:44:02.285 00:44:02.285 09:10:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uzetgin30enp8pbj08615npv62ekqwdfqzg69jehvx23xqpbjyp9yrsbjkdvrbx0a0v5gpkl3e1i8nfz78ebqgmxc6bvmbbwk8cwv5ma7yrvbe218d2ust1tce76ehe4u6gwy4qw7friwa9gnlsp2myshguhjdkyrl7x6wfo2lmyspo0d810ozx04unxikfltpwe0p9rhmpp3bznqfbwgvxnya5yfxhwd9kgsdr4phnatu36knemik2g66h015zbk31bqpqhdff9jt2vg4vileenwjc7o8b7sj4iu4ppohvxulcssndiyivybbjcm70vat6dhytm59es49ct2410efvcmhn4qz9idi8ujen67xfysayh17a2k8pmi5mkkpniohko072pdzygbv41h08gbs2099k9acbvzgnwah3r7dp32a25zkt5p90zbeelpznesreoql3tnyuvjv71hf1llwpcj0dtu7ye5eqpv0f2nyhwdttonminrdpmlgt5i3n6 == \u\z\e\t\g\i\n\3\0\e\n\p\8\p\b\j\0\8\6\1\5\n\p\v\6\2\e\k\q\w\d\f\q\z\g\6\9\j\e\h\v\x\2\3\x\q\p\b\j\y\p\9\y\r\s\b\j\k\d\v\r\b\x\0\a\0\v\5\g\p\k\l\3\e\1\i\8\n\f\z\7\8\e\b\q\g\m\x\c\6\b\v\m\b\b\w\k\8\c\w\v\5\m\a\7\y\r\v\b\e\2\1\8\d\2\u\s\t\1\t\c\e\7\6\e\h\e\4\u\6\g\w\y\4\q\w\7\f\r\i\w\a\9\g\n\l\s\p\2\m\y\s\h\g\u\h\j\d\k\y\r\l\7\x\6\w\f\o\2\l\m\y\s\p\o\0\d\8\1\0\o\z\x\0\4\u\n\x\i\k\f\l\t\p\w\e\0\p\9\r\h\m\p\p\3\b\z\n\q\f\b\w\g\v\x\n\y\a\5\y\f\x\h\w\d\9\k\g\s\d\r\4\p\h\n\a\t\u\3\6\k\n\e\m\i\k\2\g\6\6\h\0\1\5\z\b\k\3\1\b\q\p\q\h\d\f\f\9\j\t\2\v\g\4\v\i\l\e\e\n\w\j\c\7\o\8\b\7\s\j\4\i\u\4\p\p\o\h\v\x\u\l\c\s\s\n\d\i\y\i\v\y\b\b\j\c\m\7\0\v\a\t\6\d\h\y\t\m\5\9\e\s\4\9\c\t\2\4\1\0\e\f\v\c\m\h\n\4\q\z\9\i\d\i\8\u\j\e\n\6\7\x\f\y\s\a\y\h\1\7\a\2\k\8\p\m\i\5\m\k\k\p\n\i\o\h\k\o\0\7\2\p\d\z\y\g\b\v\4\1\h\0\8\g\b\s\2\0\9\9\k\9\a\c\b\v\z\g\n\w\a\h\3\r\7\d\p\3\2\a\2\5\z\k\t\5\p\9\0\z\b\e\e\l\p\z\n\e\s\r\e\o\q\l\3\t\n\y\u\v\j\v\7\1\h\f\1\l\l\w\p\c\j\0\d\t\u\7\y\e\5\e\q\p\v\0\f\2\n\y\h\w\d\t\t\o\n\m\i\n\r\d\p\m\l\g\t\5\i\3\n\6 ]] 00:44:02.285 09:10:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:02.285 09:10:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:44:02.285 [2024-07-12 09:10:37.312333] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:02.285 [2024-07-12 09:10:37.312730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170091 ] 00:44:02.285 [2024-07-12 09:10:37.475419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:02.543 [2024-07-12 09:10:37.697431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:04.041  Copying: 512/512 [B] (average 500 kBps) 00:44:04.041 00:44:04.041 09:10:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uzetgin30enp8pbj08615npv62ekqwdfqzg69jehvx23xqpbjyp9yrsbjkdvrbx0a0v5gpkl3e1i8nfz78ebqgmxc6bvmbbwk8cwv5ma7yrvbe218d2ust1tce76ehe4u6gwy4qw7friwa9gnlsp2myshguhjdkyrl7x6wfo2lmyspo0d810ozx04unxikfltpwe0p9rhmpp3bznqfbwgvxnya5yfxhwd9kgsdr4phnatu36knemik2g66h015zbk31bqpqhdff9jt2vg4vileenwjc7o8b7sj4iu4ppohvxulcssndiyivybbjcm70vat6dhytm59es49ct2410efvcmhn4qz9idi8ujen67xfysayh17a2k8pmi5mkkpniohko072pdzygbv41h08gbs2099k9acbvzgnwah3r7dp32a25zkt5p90zbeelpznesreoql3tnyuvjv71hf1llwpcj0dtu7ye5eqpv0f2nyhwdttonminrdpmlgt5i3n6 == \u\z\e\t\g\i\n\3\0\e\n\p\8\p\b\j\0\8\6\1\5\n\p\v\6\2\e\k\q\w\d\f\q\z\g\6\9\j\e\h\v\x\2\3\x\q\p\b\j\y\p\9\y\r\s\b\j\k\d\v\r\b\x\0\a\0\v\5\g\p\k\l\3\e\1\i\8\n\f\z\7\8\e\b\q\g\m\x\c\6\b\v\m\b\b\w\k\8\c\w\v\5\m\a\7\y\r\v\b\e\2\1\8\d\2\u\s\t\1\t\c\e\7\6\e\h\e\4\u\6\g\w\y\4\q\w\7\f\r\i\w\a\9\g\n\l\s\p\2\m\y\s\h\g\u\h\j\d\k\y\r\l\7\x\6\w\f\o\2\l\m\y\s\p\o\0\d\8\1\0\o\z\x\0\4\u\n\x\i\k\f\l\t\p\w\e\0\p\9\r\h\m\p\p\3\b\z\n\q\f\b\w\g\v\x\n\y\a\5\y\f\x\h\w\d\9\k\g\s\d\r\4\p\h\n\a\t\u\3\6\k\n\e\m\i\k\2\g\6\6\h\0\1\5\z\b\k\3\1\b\q\p\q\h\d\f\f\9\j\t\2\v\g\4\v\i\l\e\e\n\w\j\c\7\o\8\b\7\s\j\4\i\u\4\p\p\o\h\v\x\u\l\c\s\s\n\d\i\y\i\v\y\b\b\j\c\m\7\0\v\a\t\6\d\h\y\t\m\5\9\e\s\4\9\c\t\2\4\1\0\e\f\v\c\m\h\n\4\q\z\9\i\d\i\8\u\j\e\n\6\7\x\f\y\s\a\y\h\1\7\a\2\k\8\p\m\i\5\m\k\k\p\n\i\o\h\k\o\0\7\2\p\d\z\y\g\b\v\4\1\h\0\8\g\b\s\2\0\9\9\k\9\a\c\b\v\z\g\n\w\a\h\3\r\7\d\p\3\2\a\2\5\z\k\t\5\p\9\0\z\b\e\e\l\p\z\n\e\s\r\e\o\q\l\3\t\n\y\u\v\j\v\7\1\h\f\1\l\l\w\p\c\j\0\d\t\u\7\y\e\5\e\q\p\v\0\f\2\n\y\h\w\d\t\t\o\n\m\i\n\r\d\p\m\l\g\t\5\i\3\n\6 ]] 00:44:04.041 09:10:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:04.041 09:10:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:44:04.298 [2024-07-12 09:10:39.243293] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:04.298 [2024-07-12 09:10:39.243786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170114 ] 00:44:04.298 [2024-07-12 09:10:39.415319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.556 [2024-07-12 09:10:39.632125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.187  Copying: 512/512 [B] (average 500 kBps) 00:44:06.188 00:44:06.188 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uzetgin30enp8pbj08615npv62ekqwdfqzg69jehvx23xqpbjyp9yrsbjkdvrbx0a0v5gpkl3e1i8nfz78ebqgmxc6bvmbbwk8cwv5ma7yrvbe218d2ust1tce76ehe4u6gwy4qw7friwa9gnlsp2myshguhjdkyrl7x6wfo2lmyspo0d810ozx04unxikfltpwe0p9rhmpp3bznqfbwgvxnya5yfxhwd9kgsdr4phnatu36knemik2g66h015zbk31bqpqhdff9jt2vg4vileenwjc7o8b7sj4iu4ppohvxulcssndiyivybbjcm70vat6dhytm59es49ct2410efvcmhn4qz9idi8ujen67xfysayh17a2k8pmi5mkkpniohko072pdzygbv41h08gbs2099k9acbvzgnwah3r7dp32a25zkt5p90zbeelpznesreoql3tnyuvjv71hf1llwpcj0dtu7ye5eqpv0f2nyhwdttonminrdpmlgt5i3n6 == \u\z\e\t\g\i\n\3\0\e\n\p\8\p\b\j\0\8\6\1\5\n\p\v\6\2\e\k\q\w\d\f\q\z\g\6\9\j\e\h\v\x\2\3\x\q\p\b\j\y\p\9\y\r\s\b\j\k\d\v\r\b\x\0\a\0\v\5\g\p\k\l\3\e\1\i\8\n\f\z\7\8\e\b\q\g\m\x\c\6\b\v\m\b\b\w\k\8\c\w\v\5\m\a\7\y\r\v\b\e\2\1\8\d\2\u\s\t\1\t\c\e\7\6\e\h\e\4\u\6\g\w\y\4\q\w\7\f\r\i\w\a\9\g\n\l\s\p\2\m\y\s\h\g\u\h\j\d\k\y\r\l\7\x\6\w\f\o\2\l\m\y\s\p\o\0\d\8\1\0\o\z\x\0\4\u\n\x\i\k\f\l\t\p\w\e\0\p\9\r\h\m\p\p\3\b\z\n\q\f\b\w\g\v\x\n\y\a\5\y\f\x\h\w\d\9\k\g\s\d\r\4\p\h\n\a\t\u\3\6\k\n\e\m\i\k\2\g\6\6\h\0\1\5\z\b\k\3\1\b\q\p\q\h\d\f\f\9\j\t\2\v\g\4\v\i\l\e\e\n\w\j\c\7\o\8\b\7\s\j\4\i\u\4\p\p\o\h\v\x\u\l\c\s\s\n\d\i\y\i\v\y\b\b\j\c\m\7\0\v\a\t\6\d\h\y\t\m\5\9\e\s\4\9\c\t\2\4\1\0\e\f\v\c\m\h\n\4\q\z\9\i\d\i\8\u\j\e\n\6\7\x\f\y\s\a\y\h\1\7\a\2\k\8\p\m\i\5\m\k\k\p\n\i\o\h\k\o\0\7\2\p\d\z\y\g\b\v\4\1\h\0\8\g\b\s\2\0\9\9\k\9\a\c\b\v\z\g\n\w\a\h\3\r\7\d\p\3\2\a\2\5\z\k\t\5\p\9\0\z\b\e\e\l\p\z\n\e\s\r\e\o\q\l\3\t\n\y\u\v\j\v\7\1\h\f\1\l\l\w\p\c\j\0\d\t\u\7\y\e\5\e\q\p\v\0\f\2\n\y\h\w\d\t\t\o\n\m\i\n\r\d\p\m\l\g\t\5\i\3\n\6 ]] 00:44:06.188 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:06.188 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:44:06.188 [2024-07-12 09:10:41.178443] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:06.188 [2024-07-12 09:10:41.178890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170154 ] 00:44:06.188 [2024-07-12 09:10:41.351698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:06.446 [2024-07-12 09:10:41.573957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:07.957  Copying: 512/512 [B] (average 250 kBps) 00:44:07.957 00:44:07.957 ************************************ 00:44:07.957 END TEST dd_flags_misc 00:44:07.957 ************************************ 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ uzetgin30enp8pbj08615npv62ekqwdfqzg69jehvx23xqpbjyp9yrsbjkdvrbx0a0v5gpkl3e1i8nfz78ebqgmxc6bvmbbwk8cwv5ma7yrvbe218d2ust1tce76ehe4u6gwy4qw7friwa9gnlsp2myshguhjdkyrl7x6wfo2lmyspo0d810ozx04unxikfltpwe0p9rhmpp3bznqfbwgvxnya5yfxhwd9kgsdr4phnatu36knemik2g66h015zbk31bqpqhdff9jt2vg4vileenwjc7o8b7sj4iu4ppohvxulcssndiyivybbjcm70vat6dhytm59es49ct2410efvcmhn4qz9idi8ujen67xfysayh17a2k8pmi5mkkpniohko072pdzygbv41h08gbs2099k9acbvzgnwah3r7dp32a25zkt5p90zbeelpznesreoql3tnyuvjv71hf1llwpcj0dtu7ye5eqpv0f2nyhwdttonminrdpmlgt5i3n6 == \u\z\e\t\g\i\n\3\0\e\n\p\8\p\b\j\0\8\6\1\5\n\p\v\6\2\e\k\q\w\d\f\q\z\g\6\9\j\e\h\v\x\2\3\x\q\p\b\j\y\p\9\y\r\s\b\j\k\d\v\r\b\x\0\a\0\v\5\g\p\k\l\3\e\1\i\8\n\f\z\7\8\e\b\q\g\m\x\c\6\b\v\m\b\b\w\k\8\c\w\v\5\m\a\7\y\r\v\b\e\2\1\8\d\2\u\s\t\1\t\c\e\7\6\e\h\e\4\u\6\g\w\y\4\q\w\7\f\r\i\w\a\9\g\n\l\s\p\2\m\y\s\h\g\u\h\j\d\k\y\r\l\7\x\6\w\f\o\2\l\m\y\s\p\o\0\d\8\1\0\o\z\x\0\4\u\n\x\i\k\f\l\t\p\w\e\0\p\9\r\h\m\p\p\3\b\z\n\q\f\b\w\g\v\x\n\y\a\5\y\f\x\h\w\d\9\k\g\s\d\r\4\p\h\n\a\t\u\3\6\k\n\e\m\i\k\2\g\6\6\h\0\1\5\z\b\k\3\1\b\q\p\q\h\d\f\f\9\j\t\2\v\g\4\v\i\l\e\e\n\w\j\c\7\o\8\b\7\s\j\4\i\u\4\p\p\o\h\v\x\u\l\c\s\s\n\d\i\y\i\v\y\b\b\j\c\m\7\0\v\a\t\6\d\h\y\t\m\5\9\e\s\4\9\c\t\2\4\1\0\e\f\v\c\m\h\n\4\q\z\9\i\d\i\8\u\j\e\n\6\7\x\f\y\s\a\y\h\1\7\a\2\k\8\p\m\i\5\m\k\k\p\n\i\o\h\k\o\0\7\2\p\d\z\y\g\b\v\4\1\h\0\8\g\b\s\2\0\9\9\k\9\a\c\b\v\z\g\n\w\a\h\3\r\7\d\p\3\2\a\2\5\z\k\t\5\p\9\0\z\b\e\e\l\p\z\n\e\s\r\e\o\q\l\3\t\n\y\u\v\j\v\7\1\h\f\1\l\l\w\p\c\j\0\d\t\u\7\y\e\5\e\q\p\v\0\f\2\n\y\h\w\d\t\t\o\n\m\i\n\r\d\p\m\l\g\t\5\i\3\n\6 ]] 00:44:07.957 00:44:07.957 real 0m15.651s 00:44:07.957 user 0m12.635s 00:44:07.957 sys 0m1.927s 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:44:07.957 * Second test run, using AIO 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:07.957 ************************************ 00:44:07.957 START TEST 
dd_flag_append_forced_aio 00:44:07.957 ************************************ 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4aebfgkghcbibu7soe9avapzyq6iy9gf 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=02aewyrcjpwtqa4corppxo4zyh3b4p34 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4aebfgkghcbibu7soe9avapzyq6iy9gf 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 02aewyrcjpwtqa4corppxo4zyh3b4p34 00:44:07.957 09:10:43 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:44:08.215 [2024-07-12 09:10:43.195856] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
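The append leg seeds dd.dump0 and dd.dump1 with two different 32-character strings and then copies dump0 onto dump1 with --oflag=append; the check on the following lines expects dump1 to end up as its original bytes followed by dump0's. A rough sketch of that assertion (variable names are illustrative only):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s "$dump0" > dd.dump0                         # 32 random characters
printf %s "$dump1" > dd.dump1                         # 32 different random characters
"$DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]              # dump0 must land after dump1's original content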
00:44:08.215 [2024-07-12 09:10:43.196045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170200 ] 00:44:08.215 [2024-07-12 09:10:43.359417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.473 [2024-07-12 09:10:43.572939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:10.113  Copying: 32/32 [B] (average 31 kBps) 00:44:10.113 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 02aewyrcjpwtqa4corppxo4zyh3b4p344aebfgkghcbibu7soe9avapzyq6iy9gf == \0\2\a\e\w\y\r\c\j\p\w\t\q\a\4\c\o\r\p\p\x\o\4\z\y\h\3\b\4\p\3\4\4\a\e\b\f\g\k\g\h\c\b\i\b\u\7\s\o\e\9\a\v\a\p\z\y\q\6\i\y\9\g\f ]] 00:44:10.113 00:44:10.113 real 0m1.967s 00:44:10.113 user 0m1.582s 00:44:10.113 sys 0m0.256s 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:10.113 ************************************ 00:44:10.113 END TEST dd_flag_append_forced_aio 00:44:10.113 ************************************ 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:10.113 ************************************ 00:44:10.113 START TEST dd_flag_directory_forced_aio 00:44:10.113 ************************************ 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:10.113 09:10:45 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:10.113 [2024-07-12 09:10:45.206469] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:10.113 [2024-07-12 09:10:45.206668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170248 ] 00:44:10.372 [2024-07-12 09:10:45.366973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:10.631 [2024-07-12 09:10:45.607853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:10.889 [2024-07-12 09:10:45.909578] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:10.889 [2024-07-12 09:10:45.909705] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:10.889 [2024-07-12 09:10:45.909740] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:11.455 [2024-07-12 09:10:46.632933] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
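The directory-flag leg is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and, in the second run below, --oflag=directory) has to fail with "Not a directory", and the NOT/valid_exec_arg wrapper turns that expected failure into a pass. Stripped of the wrapper, the idea is roughly (paths assumed):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0; then
    echo "unexpected success on a regular file"; exit 1
else
    echo "got the expected 'Not a directory' error"    # matches the dd_open_file ERROR above
fi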
00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:12.022 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:12.023 09:10:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:44:12.023 [2024-07-12 09:10:47.111184] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:12.023 [2024-07-12 09:10:47.111416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170276 ] 00:44:12.281 [2024-07-12 09:10:47.281470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:12.539 [2024-07-12 09:10:47.499156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:12.798 [2024-07-12 09:10:47.800982] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:12.798 [2024-07-12 09:10:47.801131] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:44:12.798 [2024-07-12 09:10:47.801172] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:13.364 [2024-07-12 09:10:48.538129] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:13.930 00:44:13.930 real 0m3.802s 00:44:13.930 user 0m3.123s 00:44:13.930 sys 0m0.480s 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:13.930 ************************************ 00:44:13.930 END TEST 
dd_flag_directory_forced_aio 00:44:13.930 ************************************ 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:13.930 09:10:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:13.930 ************************************ 00:44:13.930 START TEST dd_flag_nofollow_forced_aio 00:44:13.930 ************************************ 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:13.930 09:10:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:13.930 [2024-07-12 09:10:49.069989] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:13.930 [2024-07-12 09:10:49.071367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170321 ] 00:44:14.188 [2024-07-12 09:10:49.234963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:14.446 [2024-07-12 09:10:49.452536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:14.704 [2024-07-12 09:10:49.760370] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:44:14.704 [2024-07-12 09:10:49.760723] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:44:14.704 [2024-07-12 09:10:49.760868] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:15.638 [2024-07-12 09:10:50.497057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:44:15.895 09:10:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:44:15.895 [2024-07-12 09:10:50.991986] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:15.895 [2024-07-12 09:10:50.992471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170369 ] 00:44:16.153 [2024-07-12 09:10:51.167017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:16.411 [2024-07-12 09:10:51.388069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:16.669 [2024-07-12 09:10:51.705941] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:44:16.669 [2024-07-12 09:10:51.706292] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:44:16.669 [2024-07-12 09:10:51.706363] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:17.602 [2024-07-12 09:10:52.441708] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:17.859 09:10:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:17.859 [2024-07-12 09:10:52.933551] Starting SPDK 
v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:17.859 [2024-07-12 09:10:52.934501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170391 ] 00:44:18.117 [2024-07-12 09:10:53.108450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:18.374 [2024-07-12 09:10:53.332800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:20.017  Copying: 512/512 [B] (average 500 kBps) 00:44:20.017 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 68q4ovl7uaanwnqnkjzv2rk4rbant89dgsj7az6f8ij1gushu6pxds2y85rsttkc772fgwcwj9ak3gn0cryjh7oj9jihvpb4o4jtjcx0fsnjiwelkoexcchofk9l4xm85i4xk4x6e4ajpdtny4rmvjzk6rhqd1ngscsbypn3dqun14m45od6pr4g1sodg6kmdsbxtbzlycljz9kcvfk8cd92qsz1k5jehhgrn580ibqfgf0q7no0nhg0h6wjqr0q3g5aypo95h4wu39lnqqvyenot47isjeq63bj95lt4jo7xjt75ywzzz3pvj8fgz68elkr23ycdgmewty37afeaqyu2sbux9biyqbh0205c3eazf10t19o84n3ev0yxedhww865z9mw3xvcgy95023ndyr2i5b1vlh2q6dvnfof4hyab8tuy1aj6qv64cjgkvbo0zqpggipomeulq8panfk5qp89z8baijq5lcabisjcechlk3s1y0zoh16crih7gc == \6\8\q\4\o\v\l\7\u\a\a\n\w\n\q\n\k\j\z\v\2\r\k\4\r\b\a\n\t\8\9\d\g\s\j\7\a\z\6\f\8\i\j\1\g\u\s\h\u\6\p\x\d\s\2\y\8\5\r\s\t\t\k\c\7\7\2\f\g\w\c\w\j\9\a\k\3\g\n\0\c\r\y\j\h\7\o\j\9\j\i\h\v\p\b\4\o\4\j\t\j\c\x\0\f\s\n\j\i\w\e\l\k\o\e\x\c\c\h\o\f\k\9\l\4\x\m\8\5\i\4\x\k\4\x\6\e\4\a\j\p\d\t\n\y\4\r\m\v\j\z\k\6\r\h\q\d\1\n\g\s\c\s\b\y\p\n\3\d\q\u\n\1\4\m\4\5\o\d\6\p\r\4\g\1\s\o\d\g\6\k\m\d\s\b\x\t\b\z\l\y\c\l\j\z\9\k\c\v\f\k\8\c\d\9\2\q\s\z\1\k\5\j\e\h\h\g\r\n\5\8\0\i\b\q\f\g\f\0\q\7\n\o\0\n\h\g\0\h\6\w\j\q\r\0\q\3\g\5\a\y\p\o\9\5\h\4\w\u\3\9\l\n\q\q\v\y\e\n\o\t\4\7\i\s\j\e\q\6\3\b\j\9\5\l\t\4\j\o\7\x\j\t\7\5\y\w\z\z\z\3\p\v\j\8\f\g\z\6\8\e\l\k\r\2\3\y\c\d\g\m\e\w\t\y\3\7\a\f\e\a\q\y\u\2\s\b\u\x\9\b\i\y\q\b\h\0\2\0\5\c\3\e\a\z\f\1\0\t\1\9\o\8\4\n\3\e\v\0\y\x\e\d\h\w\w\8\6\5\z\9\m\w\3\x\v\c\g\y\9\5\0\2\3\n\d\y\r\2\i\5\b\1\v\l\h\2\q\6\d\v\n\f\o\f\4\h\y\a\b\8\t\u\y\1\a\j\6\q\v\6\4\c\j\g\k\v\b\o\0\z\q\p\g\g\i\p\o\m\e\u\l\q\8\p\a\n\f\k\5\q\p\8\9\z\8\b\a\i\j\q\5\l\c\a\b\i\s\j\c\e\c\h\l\k\3\s\1\y\0\z\o\h\1\6\c\r\i\h\7\g\c ]] 00:44:20.017 ************************************ 00:44:20.017 END TEST dd_flag_nofollow_forced_aio 00:44:20.017 ************************************ 00:44:20.017 00:44:20.017 real 0m5.852s 00:44:20.017 user 0m4.841s 00:44:20.017 sys 0m0.666s 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:20.017 ************************************ 00:44:20.017 START TEST dd_flag_noatime_forced_aio 00:44:20.017 ************************************ 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:44:20.017 
09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720775453 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720775454 00:44:20.017 09:10:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:44:20.949 09:10:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:20.949 [2024-07-12 09:10:55.999526] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:20.950 [2024-07-12 09:10:56.000545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170455 ] 00:44:21.207 [2024-07-12 09:10:56.179299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:21.464 [2024-07-12 09:10:56.448696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:23.094  Copying: 512/512 [B] (average 500 kBps) 00:44:23.094 00:44:23.094 09:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:23.094 09:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720775453 )) 00:44:23.094 09:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:23.094 09:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720775454 )) 00:44:23.094 09:10:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:44:23.094 [2024-07-12 09:10:58.101212] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
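The noatime leg records the access time of dd.dump0 with stat --printf=%X (epoch seconds) before the copy, then checks that a read performed with --iflag=noatime leaves it unchanged, while a later copy without the flag is allowed to move it forward. A compressed sketch of the first half (file names assumed; the exact comparisons in posix.sh differ slightly):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_before=$(stat --printf=%X dd.dump0)              # access time before any read
sleep 1
"$DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before ))     # a noatime read must not bump atime
# The test then repeats the copy without --iflag=noatime and expects the access time to advance.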
00:44:23.094 [2024-07-12 09:10:58.101392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170482 ] 00:44:23.094 [2024-07-12 09:10:58.275714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:23.351 [2024-07-12 09:10:58.506306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.906  Copying: 512/512 [B] (average 500 kBps) 00:44:24.906 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720775458 )) 00:44:24.907 00:44:24.907 real 0m5.127s 00:44:24.907 user 0m3.352s 00:44:24.907 sys 0m0.485s 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:24.907 ************************************ 00:44:24.907 END TEST dd_flag_noatime_forced_aio 00:44:24.907 ************************************ 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:24.907 ************************************ 00:44:24.907 START TEST dd_flags_misc_forced_aio 00:44:24.907 ************************************ 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:24.907 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:25.164 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:25.164 09:11:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:44:25.164 [2024-07-12 09:11:00.166130] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
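dd_flags_misc_forced_aio drives the same flag matrix as the earlier non-AIO run: flags_ro=(direct nonblock) on the read side, flags_rw adding sync and dsync on the write side, with a fresh 512-byte payload per read flag. A sketch of the loop (the payload generator and the comparison are stand-ins for the posix.sh helpers):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > dd.dump0                # stand-in for gen_bytes 512
    for flag_rw in "${flags_rw[@]}"; do
        "$DD" --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        cmp -s dd.dump0 dd.dump1                       # every combination must yield an identical copy
    done
done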
00:44:25.164 [2024-07-12 09:11:00.166370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170525 ] 00:44:25.164 [2024-07-12 09:11:00.342339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:25.422 [2024-07-12 09:11:00.605869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.362  Copying: 512/512 [B] (average 500 kBps) 00:44:27.362 00:44:27.363 09:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rgeimo9nfuztzl4ej0o3ndgkkmet7thcpfwkalikhvasrn7l5vjvf5hk9zu2ko4m53zwheatsy7agcf8goc2s0m44nb1rrsw5yaudued1tgjtkyug18t9m1cqvsesbp8cbp9shrjrvvc69c3ki8iqzukut6t85toycxyjqptva2c39ngvg1fq5lmkepn0czgk1rbl0hvh3jtfrn3nlpac464t1c2ki908fdptuopfdeauqmob9myvgm7v6r6lyvz1w6p2css8le5nh7jy6x1prnul8ymhpf95x56k1wzl99tdgbzcah07o6vvk37nppd8k9dlb6owqlqmxmcs258l4x2rr7t05v65tcnxj2g3p3s4ks2mf2kdvkz7vt75h5rx7jmg33ro3iye8t13dpyphplztdsxokbjs5yske4mbfkt57r1bn18i2bcae052dw3oys9ahdtj4z8ot7v0gn8sjufqtuhaic17sy2cqkm49vb5pit3zyvfk37u89rqx0 == \r\g\e\i\m\o\9\n\f\u\z\t\z\l\4\e\j\0\o\3\n\d\g\k\k\m\e\t\7\t\h\c\p\f\w\k\a\l\i\k\h\v\a\s\r\n\7\l\5\v\j\v\f\5\h\k\9\z\u\2\k\o\4\m\5\3\z\w\h\e\a\t\s\y\7\a\g\c\f\8\g\o\c\2\s\0\m\4\4\n\b\1\r\r\s\w\5\y\a\u\d\u\e\d\1\t\g\j\t\k\y\u\g\1\8\t\9\m\1\c\q\v\s\e\s\b\p\8\c\b\p\9\s\h\r\j\r\v\v\c\6\9\c\3\k\i\8\i\q\z\u\k\u\t\6\t\8\5\t\o\y\c\x\y\j\q\p\t\v\a\2\c\3\9\n\g\v\g\1\f\q\5\l\m\k\e\p\n\0\c\z\g\k\1\r\b\l\0\h\v\h\3\j\t\f\r\n\3\n\l\p\a\c\4\6\4\t\1\c\2\k\i\9\0\8\f\d\p\t\u\o\p\f\d\e\a\u\q\m\o\b\9\m\y\v\g\m\7\v\6\r\6\l\y\v\z\1\w\6\p\2\c\s\s\8\l\e\5\n\h\7\j\y\6\x\1\p\r\n\u\l\8\y\m\h\p\f\9\5\x\5\6\k\1\w\z\l\9\9\t\d\g\b\z\c\a\h\0\7\o\6\v\v\k\3\7\n\p\p\d\8\k\9\d\l\b\6\o\w\q\l\q\m\x\m\c\s\2\5\8\l\4\x\2\r\r\7\t\0\5\v\6\5\t\c\n\x\j\2\g\3\p\3\s\4\k\s\2\m\f\2\k\d\v\k\z\7\v\t\7\5\h\5\r\x\7\j\m\g\3\3\r\o\3\i\y\e\8\t\1\3\d\p\y\p\h\p\l\z\t\d\s\x\o\k\b\j\s\5\y\s\k\e\4\m\b\f\k\t\5\7\r\1\b\n\1\8\i\2\b\c\a\e\0\5\2\d\w\3\o\y\s\9\a\h\d\t\j\4\z\8\o\t\7\v\0\g\n\8\s\j\u\f\q\t\u\h\a\i\c\1\7\s\y\2\c\q\k\m\4\9\v\b\5\p\i\t\3\z\y\v\f\k\3\7\u\8\9\r\q\x\0 ]] 00:44:27.363 09:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:27.363 09:11:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:44:27.363 [2024-07-12 09:11:02.243463] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:27.363 [2024-07-12 09:11:02.244320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170571 ] 00:44:27.363 [2024-07-12 09:11:02.412682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.622 [2024-07-12 09:11:02.667461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:29.282  Copying: 512/512 [B] (average 500 kBps) 00:44:29.282 00:44:29.282 09:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rgeimo9nfuztzl4ej0o3ndgkkmet7thcpfwkalikhvasrn7l5vjvf5hk9zu2ko4m53zwheatsy7agcf8goc2s0m44nb1rrsw5yaudued1tgjtkyug18t9m1cqvsesbp8cbp9shrjrvvc69c3ki8iqzukut6t85toycxyjqptva2c39ngvg1fq5lmkepn0czgk1rbl0hvh3jtfrn3nlpac464t1c2ki908fdptuopfdeauqmob9myvgm7v6r6lyvz1w6p2css8le5nh7jy6x1prnul8ymhpf95x56k1wzl99tdgbzcah07o6vvk37nppd8k9dlb6owqlqmxmcs258l4x2rr7t05v65tcnxj2g3p3s4ks2mf2kdvkz7vt75h5rx7jmg33ro3iye8t13dpyphplztdsxokbjs5yske4mbfkt57r1bn18i2bcae052dw3oys9ahdtj4z8ot7v0gn8sjufqtuhaic17sy2cqkm49vb5pit3zyvfk37u89rqx0 == \r\g\e\i\m\o\9\n\f\u\z\t\z\l\4\e\j\0\o\3\n\d\g\k\k\m\e\t\7\t\h\c\p\f\w\k\a\l\i\k\h\v\a\s\r\n\7\l\5\v\j\v\f\5\h\k\9\z\u\2\k\o\4\m\5\3\z\w\h\e\a\t\s\y\7\a\g\c\f\8\g\o\c\2\s\0\m\4\4\n\b\1\r\r\s\w\5\y\a\u\d\u\e\d\1\t\g\j\t\k\y\u\g\1\8\t\9\m\1\c\q\v\s\e\s\b\p\8\c\b\p\9\s\h\r\j\r\v\v\c\6\9\c\3\k\i\8\i\q\z\u\k\u\t\6\t\8\5\t\o\y\c\x\y\j\q\p\t\v\a\2\c\3\9\n\g\v\g\1\f\q\5\l\m\k\e\p\n\0\c\z\g\k\1\r\b\l\0\h\v\h\3\j\t\f\r\n\3\n\l\p\a\c\4\6\4\t\1\c\2\k\i\9\0\8\f\d\p\t\u\o\p\f\d\e\a\u\q\m\o\b\9\m\y\v\g\m\7\v\6\r\6\l\y\v\z\1\w\6\p\2\c\s\s\8\l\e\5\n\h\7\j\y\6\x\1\p\r\n\u\l\8\y\m\h\p\f\9\5\x\5\6\k\1\w\z\l\9\9\t\d\g\b\z\c\a\h\0\7\o\6\v\v\k\3\7\n\p\p\d\8\k\9\d\l\b\6\o\w\q\l\q\m\x\m\c\s\2\5\8\l\4\x\2\r\r\7\t\0\5\v\6\5\t\c\n\x\j\2\g\3\p\3\s\4\k\s\2\m\f\2\k\d\v\k\z\7\v\t\7\5\h\5\r\x\7\j\m\g\3\3\r\o\3\i\y\e\8\t\1\3\d\p\y\p\h\p\l\z\t\d\s\x\o\k\b\j\s\5\y\s\k\e\4\m\b\f\k\t\5\7\r\1\b\n\1\8\i\2\b\c\a\e\0\5\2\d\w\3\o\y\s\9\a\h\d\t\j\4\z\8\o\t\7\v\0\g\n\8\s\j\u\f\q\t\u\h\a\i\c\1\7\s\y\2\c\q\k\m\4\9\v\b\5\p\i\t\3\z\y\v\f\k\3\7\u\8\9\r\q\x\0 ]] 00:44:29.282 09:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:29.282 09:11:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:44:29.282 [2024-07-12 09:11:04.314280] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:29.282 [2024-07-12 09:11:04.314526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170599 ] 00:44:29.541 [2024-07-12 09:11:04.486760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:29.799 [2024-07-12 09:11:04.753552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.431  Copying: 512/512 [B] (average 250 kBps) 00:44:31.431 00:44:31.431 09:11:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rgeimo9nfuztzl4ej0o3ndgkkmet7thcpfwkalikhvasrn7l5vjvf5hk9zu2ko4m53zwheatsy7agcf8goc2s0m44nb1rrsw5yaudued1tgjtkyug18t9m1cqvsesbp8cbp9shrjrvvc69c3ki8iqzukut6t85toycxyjqptva2c39ngvg1fq5lmkepn0czgk1rbl0hvh3jtfrn3nlpac464t1c2ki908fdptuopfdeauqmob9myvgm7v6r6lyvz1w6p2css8le5nh7jy6x1prnul8ymhpf95x56k1wzl99tdgbzcah07o6vvk37nppd8k9dlb6owqlqmxmcs258l4x2rr7t05v65tcnxj2g3p3s4ks2mf2kdvkz7vt75h5rx7jmg33ro3iye8t13dpyphplztdsxokbjs5yske4mbfkt57r1bn18i2bcae052dw3oys9ahdtj4z8ot7v0gn8sjufqtuhaic17sy2cqkm49vb5pit3zyvfk37u89rqx0 == \r\g\e\i\m\o\9\n\f\u\z\t\z\l\4\e\j\0\o\3\n\d\g\k\k\m\e\t\7\t\h\c\p\f\w\k\a\l\i\k\h\v\a\s\r\n\7\l\5\v\j\v\f\5\h\k\9\z\u\2\k\o\4\m\5\3\z\w\h\e\a\t\s\y\7\a\g\c\f\8\g\o\c\2\s\0\m\4\4\n\b\1\r\r\s\w\5\y\a\u\d\u\e\d\1\t\g\j\t\k\y\u\g\1\8\t\9\m\1\c\q\v\s\e\s\b\p\8\c\b\p\9\s\h\r\j\r\v\v\c\6\9\c\3\k\i\8\i\q\z\u\k\u\t\6\t\8\5\t\o\y\c\x\y\j\q\p\t\v\a\2\c\3\9\n\g\v\g\1\f\q\5\l\m\k\e\p\n\0\c\z\g\k\1\r\b\l\0\h\v\h\3\j\t\f\r\n\3\n\l\p\a\c\4\6\4\t\1\c\2\k\i\9\0\8\f\d\p\t\u\o\p\f\d\e\a\u\q\m\o\b\9\m\y\v\g\m\7\v\6\r\6\l\y\v\z\1\w\6\p\2\c\s\s\8\l\e\5\n\h\7\j\y\6\x\1\p\r\n\u\l\8\y\m\h\p\f\9\5\x\5\6\k\1\w\z\l\9\9\t\d\g\b\z\c\a\h\0\7\o\6\v\v\k\3\7\n\p\p\d\8\k\9\d\l\b\6\o\w\q\l\q\m\x\m\c\s\2\5\8\l\4\x\2\r\r\7\t\0\5\v\6\5\t\c\n\x\j\2\g\3\p\3\s\4\k\s\2\m\f\2\k\d\v\k\z\7\v\t\7\5\h\5\r\x\7\j\m\g\3\3\r\o\3\i\y\e\8\t\1\3\d\p\y\p\h\p\l\z\t\d\s\x\o\k\b\j\s\5\y\s\k\e\4\m\b\f\k\t\5\7\r\1\b\n\1\8\i\2\b\c\a\e\0\5\2\d\w\3\o\y\s\9\a\h\d\t\j\4\z\8\o\t\7\v\0\g\n\8\s\j\u\f\q\t\u\h\a\i\c\1\7\s\y\2\c\q\k\m\4\9\v\b\5\p\i\t\3\z\y\v\f\k\3\7\u\8\9\r\q\x\0 ]] 00:44:31.431 09:11:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:31.431 09:11:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:44:31.431 [2024-07-12 09:11:06.476578] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:31.431 [2024-07-12 09:11:06.476892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170624 ] 00:44:31.689 [2024-07-12 09:11:06.650631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:31.946 [2024-07-12 09:11:06.925512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:33.575  Copying: 512/512 [B] (average 250 kBps) 00:44:33.575 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rgeimo9nfuztzl4ej0o3ndgkkmet7thcpfwkalikhvasrn7l5vjvf5hk9zu2ko4m53zwheatsy7agcf8goc2s0m44nb1rrsw5yaudued1tgjtkyug18t9m1cqvsesbp8cbp9shrjrvvc69c3ki8iqzukut6t85toycxyjqptva2c39ngvg1fq5lmkepn0czgk1rbl0hvh3jtfrn3nlpac464t1c2ki908fdptuopfdeauqmob9myvgm7v6r6lyvz1w6p2css8le5nh7jy6x1prnul8ymhpf95x56k1wzl99tdgbzcah07o6vvk37nppd8k9dlb6owqlqmxmcs258l4x2rr7t05v65tcnxj2g3p3s4ks2mf2kdvkz7vt75h5rx7jmg33ro3iye8t13dpyphplztdsxokbjs5yske4mbfkt57r1bn18i2bcae052dw3oys9ahdtj4z8ot7v0gn8sjufqtuhaic17sy2cqkm49vb5pit3zyvfk37u89rqx0 == \r\g\e\i\m\o\9\n\f\u\z\t\z\l\4\e\j\0\o\3\n\d\g\k\k\m\e\t\7\t\h\c\p\f\w\k\a\l\i\k\h\v\a\s\r\n\7\l\5\v\j\v\f\5\h\k\9\z\u\2\k\o\4\m\5\3\z\w\h\e\a\t\s\y\7\a\g\c\f\8\g\o\c\2\s\0\m\4\4\n\b\1\r\r\s\w\5\y\a\u\d\u\e\d\1\t\g\j\t\k\y\u\g\1\8\t\9\m\1\c\q\v\s\e\s\b\p\8\c\b\p\9\s\h\r\j\r\v\v\c\6\9\c\3\k\i\8\i\q\z\u\k\u\t\6\t\8\5\t\o\y\c\x\y\j\q\p\t\v\a\2\c\3\9\n\g\v\g\1\f\q\5\l\m\k\e\p\n\0\c\z\g\k\1\r\b\l\0\h\v\h\3\j\t\f\r\n\3\n\l\p\a\c\4\6\4\t\1\c\2\k\i\9\0\8\f\d\p\t\u\o\p\f\d\e\a\u\q\m\o\b\9\m\y\v\g\m\7\v\6\r\6\l\y\v\z\1\w\6\p\2\c\s\s\8\l\e\5\n\h\7\j\y\6\x\1\p\r\n\u\l\8\y\m\h\p\f\9\5\x\5\6\k\1\w\z\l\9\9\t\d\g\b\z\c\a\h\0\7\o\6\v\v\k\3\7\n\p\p\d\8\k\9\d\l\b\6\o\w\q\l\q\m\x\m\c\s\2\5\8\l\4\x\2\r\r\7\t\0\5\v\6\5\t\c\n\x\j\2\g\3\p\3\s\4\k\s\2\m\f\2\k\d\v\k\z\7\v\t\7\5\h\5\r\x\7\j\m\g\3\3\r\o\3\i\y\e\8\t\1\3\d\p\y\p\h\p\l\z\t\d\s\x\o\k\b\j\s\5\y\s\k\e\4\m\b\f\k\t\5\7\r\1\b\n\1\8\i\2\b\c\a\e\0\5\2\d\w\3\o\y\s\9\a\h\d\t\j\4\z\8\o\t\7\v\0\g\n\8\s\j\u\f\q\t\u\h\a\i\c\1\7\s\y\2\c\q\k\m\4\9\v\b\5\p\i\t\3\z\y\v\f\k\3\7\u\8\9\r\q\x\0 ]] 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:33.575 09:11:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:44:33.575 [2024-07-12 09:11:08.550176] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:33.575 [2024-07-12 09:11:08.550400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170648 ] 00:44:33.575 [2024-07-12 09:11:08.722772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.833 [2024-07-12 09:11:08.943471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:35.461  Copying: 512/512 [B] (average 500 kBps) 00:44:35.461 00:44:35.462 09:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hebw19mqwlsnuuz9inz9ez5hlrn3t57etllyjb4mmwux4sn8s2t7ear9x6f2aojuhttt8y2eosp88dtt2ajap5y117jlwn98s852bi64ui2sy35y1fxcq79cobn1d1oa1r7u2k0ckmndvf04k365pyek4bvmal89mp4rbxkjqh4hqex0t8rvps7ssgrs7dgatdpk7b8zue6408ymrbj8vttmw84du9vwrv9zx6nrz3gagjm9tv9c7cblotes596hk95grmbej9qbmmg9dpggfjq5rreyxclri787dq90lfuypva1b99kz8kx8a35zs9tb6blbctqs5b41lfu23lv276c5o13ggumbyq7pebasy0vq2bclzelp3ttojn7v37tpkuf13kzjhixm6r481ww8npecckwbhotxpwp57xy9idibh7jfnqljbk5fq3c0kgjjcm2ivubo0amrzwrmdzk57qlffa3f0xsltnz49csg3x8fucekboct8sb09rfoukj == \h\e\b\w\1\9\m\q\w\l\s\n\u\u\z\9\i\n\z\9\e\z\5\h\l\r\n\3\t\5\7\e\t\l\l\y\j\b\4\m\m\w\u\x\4\s\n\8\s\2\t\7\e\a\r\9\x\6\f\2\a\o\j\u\h\t\t\t\8\y\2\e\o\s\p\8\8\d\t\t\2\a\j\a\p\5\y\1\1\7\j\l\w\n\9\8\s\8\5\2\b\i\6\4\u\i\2\s\y\3\5\y\1\f\x\c\q\7\9\c\o\b\n\1\d\1\o\a\1\r\7\u\2\k\0\c\k\m\n\d\v\f\0\4\k\3\6\5\p\y\e\k\4\b\v\m\a\l\8\9\m\p\4\r\b\x\k\j\q\h\4\h\q\e\x\0\t\8\r\v\p\s\7\s\s\g\r\s\7\d\g\a\t\d\p\k\7\b\8\z\u\e\6\4\0\8\y\m\r\b\j\8\v\t\t\m\w\8\4\d\u\9\v\w\r\v\9\z\x\6\n\r\z\3\g\a\g\j\m\9\t\v\9\c\7\c\b\l\o\t\e\s\5\9\6\h\k\9\5\g\r\m\b\e\j\9\q\b\m\m\g\9\d\p\g\g\f\j\q\5\r\r\e\y\x\c\l\r\i\7\8\7\d\q\9\0\l\f\u\y\p\v\a\1\b\9\9\k\z\8\k\x\8\a\3\5\z\s\9\t\b\6\b\l\b\c\t\q\s\5\b\4\1\l\f\u\2\3\l\v\2\7\6\c\5\o\1\3\g\g\u\m\b\y\q\7\p\e\b\a\s\y\0\v\q\2\b\c\l\z\e\l\p\3\t\t\o\j\n\7\v\3\7\t\p\k\u\f\1\3\k\z\j\h\i\x\m\6\r\4\8\1\w\w\8\n\p\e\c\c\k\w\b\h\o\t\x\p\w\p\5\7\x\y\9\i\d\i\b\h\7\j\f\n\q\l\j\b\k\5\f\q\3\c\0\k\g\j\j\c\m\2\i\v\u\b\o\0\a\m\r\z\w\r\m\d\z\k\5\7\q\l\f\f\a\3\f\0\x\s\l\t\n\z\4\9\c\s\g\3\x\8\f\u\c\e\k\b\o\c\t\8\s\b\0\9\r\f\o\u\k\j ]] 00:44:35.462 09:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:35.462 09:11:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:44:35.462 [2024-07-12 09:11:10.587229] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:35.462 [2024-07-12 09:11:10.587471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170673 ] 00:44:35.720 [2024-07-12 09:11:10.763273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.977 [2024-07-12 09:11:10.982016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:37.605  Copying: 512/512 [B] (average 500 kBps) 00:44:37.605 00:44:37.605 09:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hebw19mqwlsnuuz9inz9ez5hlrn3t57etllyjb4mmwux4sn8s2t7ear9x6f2aojuhttt8y2eosp88dtt2ajap5y117jlwn98s852bi64ui2sy35y1fxcq79cobn1d1oa1r7u2k0ckmndvf04k365pyek4bvmal89mp4rbxkjqh4hqex0t8rvps7ssgrs7dgatdpk7b8zue6408ymrbj8vttmw84du9vwrv9zx6nrz3gagjm9tv9c7cblotes596hk95grmbej9qbmmg9dpggfjq5rreyxclri787dq90lfuypva1b99kz8kx8a35zs9tb6blbctqs5b41lfu23lv276c5o13ggumbyq7pebasy0vq2bclzelp3ttojn7v37tpkuf13kzjhixm6r481ww8npecckwbhotxpwp57xy9idibh7jfnqljbk5fq3c0kgjjcm2ivubo0amrzwrmdzk57qlffa3f0xsltnz49csg3x8fucekboct8sb09rfoukj == \h\e\b\w\1\9\m\q\w\l\s\n\u\u\z\9\i\n\z\9\e\z\5\h\l\r\n\3\t\5\7\e\t\l\l\y\j\b\4\m\m\w\u\x\4\s\n\8\s\2\t\7\e\a\r\9\x\6\f\2\a\o\j\u\h\t\t\t\8\y\2\e\o\s\p\8\8\d\t\t\2\a\j\a\p\5\y\1\1\7\j\l\w\n\9\8\s\8\5\2\b\i\6\4\u\i\2\s\y\3\5\y\1\f\x\c\q\7\9\c\o\b\n\1\d\1\o\a\1\r\7\u\2\k\0\c\k\m\n\d\v\f\0\4\k\3\6\5\p\y\e\k\4\b\v\m\a\l\8\9\m\p\4\r\b\x\k\j\q\h\4\h\q\e\x\0\t\8\r\v\p\s\7\s\s\g\r\s\7\d\g\a\t\d\p\k\7\b\8\z\u\e\6\4\0\8\y\m\r\b\j\8\v\t\t\m\w\8\4\d\u\9\v\w\r\v\9\z\x\6\n\r\z\3\g\a\g\j\m\9\t\v\9\c\7\c\b\l\o\t\e\s\5\9\6\h\k\9\5\g\r\m\b\e\j\9\q\b\m\m\g\9\d\p\g\g\f\j\q\5\r\r\e\y\x\c\l\r\i\7\8\7\d\q\9\0\l\f\u\y\p\v\a\1\b\9\9\k\z\8\k\x\8\a\3\5\z\s\9\t\b\6\b\l\b\c\t\q\s\5\b\4\1\l\f\u\2\3\l\v\2\7\6\c\5\o\1\3\g\g\u\m\b\y\q\7\p\e\b\a\s\y\0\v\q\2\b\c\l\z\e\l\p\3\t\t\o\j\n\7\v\3\7\t\p\k\u\f\1\3\k\z\j\h\i\x\m\6\r\4\8\1\w\w\8\n\p\e\c\c\k\w\b\h\o\t\x\p\w\p\5\7\x\y\9\i\d\i\b\h\7\j\f\n\q\l\j\b\k\5\f\q\3\c\0\k\g\j\j\c\m\2\i\v\u\b\o\0\a\m\r\z\w\r\m\d\z\k\5\7\q\l\f\f\a\3\f\0\x\s\l\t\n\z\4\9\c\s\g\3\x\8\f\u\c\e\k\b\o\c\t\8\s\b\0\9\r\f\o\u\k\j ]] 00:44:37.605 09:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:37.605 09:11:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:44:37.605 [2024-07-12 09:11:12.552454] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:37.605 [2024-07-12 09:11:12.552655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170722 ] 00:44:37.605 [2024-07-12 09:11:12.726020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.862 [2024-07-12 09:11:12.959487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.496  Copying: 512/512 [B] (average 166 kBps) 00:44:39.496 00:44:39.496 09:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hebw19mqwlsnuuz9inz9ez5hlrn3t57etllyjb4mmwux4sn8s2t7ear9x6f2aojuhttt8y2eosp88dtt2ajap5y117jlwn98s852bi64ui2sy35y1fxcq79cobn1d1oa1r7u2k0ckmndvf04k365pyek4bvmal89mp4rbxkjqh4hqex0t8rvps7ssgrs7dgatdpk7b8zue6408ymrbj8vttmw84du9vwrv9zx6nrz3gagjm9tv9c7cblotes596hk95grmbej9qbmmg9dpggfjq5rreyxclri787dq90lfuypva1b99kz8kx8a35zs9tb6blbctqs5b41lfu23lv276c5o13ggumbyq7pebasy0vq2bclzelp3ttojn7v37tpkuf13kzjhixm6r481ww8npecckwbhotxpwp57xy9idibh7jfnqljbk5fq3c0kgjjcm2ivubo0amrzwrmdzk57qlffa3f0xsltnz49csg3x8fucekboct8sb09rfoukj == \h\e\b\w\1\9\m\q\w\l\s\n\u\u\z\9\i\n\z\9\e\z\5\h\l\r\n\3\t\5\7\e\t\l\l\y\j\b\4\m\m\w\u\x\4\s\n\8\s\2\t\7\e\a\r\9\x\6\f\2\a\o\j\u\h\t\t\t\8\y\2\e\o\s\p\8\8\d\t\t\2\a\j\a\p\5\y\1\1\7\j\l\w\n\9\8\s\8\5\2\b\i\6\4\u\i\2\s\y\3\5\y\1\f\x\c\q\7\9\c\o\b\n\1\d\1\o\a\1\r\7\u\2\k\0\c\k\m\n\d\v\f\0\4\k\3\6\5\p\y\e\k\4\b\v\m\a\l\8\9\m\p\4\r\b\x\k\j\q\h\4\h\q\e\x\0\t\8\r\v\p\s\7\s\s\g\r\s\7\d\g\a\t\d\p\k\7\b\8\z\u\e\6\4\0\8\y\m\r\b\j\8\v\t\t\m\w\8\4\d\u\9\v\w\r\v\9\z\x\6\n\r\z\3\g\a\g\j\m\9\t\v\9\c\7\c\b\l\o\t\e\s\5\9\6\h\k\9\5\g\r\m\b\e\j\9\q\b\m\m\g\9\d\p\g\g\f\j\q\5\r\r\e\y\x\c\l\r\i\7\8\7\d\q\9\0\l\f\u\y\p\v\a\1\b\9\9\k\z\8\k\x\8\a\3\5\z\s\9\t\b\6\b\l\b\c\t\q\s\5\b\4\1\l\f\u\2\3\l\v\2\7\6\c\5\o\1\3\g\g\u\m\b\y\q\7\p\e\b\a\s\y\0\v\q\2\b\c\l\z\e\l\p\3\t\t\o\j\n\7\v\3\7\t\p\k\u\f\1\3\k\z\j\h\i\x\m\6\r\4\8\1\w\w\8\n\p\e\c\c\k\w\b\h\o\t\x\p\w\p\5\7\x\y\9\i\d\i\b\h\7\j\f\n\q\l\j\b\k\5\f\q\3\c\0\k\g\j\j\c\m\2\i\v\u\b\o\0\a\m\r\z\w\r\m\d\z\k\5\7\q\l\f\f\a\3\f\0\x\s\l\t\n\z\4\9\c\s\g\3\x\8\f\u\c\e\k\b\o\c\t\8\s\b\0\9\r\f\o\u\k\j ]] 00:44:39.496 09:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:44:39.496 09:11:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:44:39.496 [2024-07-12 09:11:14.590772] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:39.496 [2024-07-12 09:11:14.591604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170747 ] 00:44:39.753 [2024-07-12 09:11:14.764591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.011 [2024-07-12 09:11:14.992024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:41.641  Copying: 512/512 [B] (average 250 kBps) 00:44:41.641 00:44:41.641 ************************************ 00:44:41.641 END TEST dd_flags_misc_forced_aio 00:44:41.641 ************************************ 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hebw19mqwlsnuuz9inz9ez5hlrn3t57etllyjb4mmwux4sn8s2t7ear9x6f2aojuhttt8y2eosp88dtt2ajap5y117jlwn98s852bi64ui2sy35y1fxcq79cobn1d1oa1r7u2k0ckmndvf04k365pyek4bvmal89mp4rbxkjqh4hqex0t8rvps7ssgrs7dgatdpk7b8zue6408ymrbj8vttmw84du9vwrv9zx6nrz3gagjm9tv9c7cblotes596hk95grmbej9qbmmg9dpggfjq5rreyxclri787dq90lfuypva1b99kz8kx8a35zs9tb6blbctqs5b41lfu23lv276c5o13ggumbyq7pebasy0vq2bclzelp3ttojn7v37tpkuf13kzjhixm6r481ww8npecckwbhotxpwp57xy9idibh7jfnqljbk5fq3c0kgjjcm2ivubo0amrzwrmdzk57qlffa3f0xsltnz49csg3x8fucekboct8sb09rfoukj == \h\e\b\w\1\9\m\q\w\l\s\n\u\u\z\9\i\n\z\9\e\z\5\h\l\r\n\3\t\5\7\e\t\l\l\y\j\b\4\m\m\w\u\x\4\s\n\8\s\2\t\7\e\a\r\9\x\6\f\2\a\o\j\u\h\t\t\t\8\y\2\e\o\s\p\8\8\d\t\t\2\a\j\a\p\5\y\1\1\7\j\l\w\n\9\8\s\8\5\2\b\i\6\4\u\i\2\s\y\3\5\y\1\f\x\c\q\7\9\c\o\b\n\1\d\1\o\a\1\r\7\u\2\k\0\c\k\m\n\d\v\f\0\4\k\3\6\5\p\y\e\k\4\b\v\m\a\l\8\9\m\p\4\r\b\x\k\j\q\h\4\h\q\e\x\0\t\8\r\v\p\s\7\s\s\g\r\s\7\d\g\a\t\d\p\k\7\b\8\z\u\e\6\4\0\8\y\m\r\b\j\8\v\t\t\m\w\8\4\d\u\9\v\w\r\v\9\z\x\6\n\r\z\3\g\a\g\j\m\9\t\v\9\c\7\c\b\l\o\t\e\s\5\9\6\h\k\9\5\g\r\m\b\e\j\9\q\b\m\m\g\9\d\p\g\g\f\j\q\5\r\r\e\y\x\c\l\r\i\7\8\7\d\q\9\0\l\f\u\y\p\v\a\1\b\9\9\k\z\8\k\x\8\a\3\5\z\s\9\t\b\6\b\l\b\c\t\q\s\5\b\4\1\l\f\u\2\3\l\v\2\7\6\c\5\o\1\3\g\g\u\m\b\y\q\7\p\e\b\a\s\y\0\v\q\2\b\c\l\z\e\l\p\3\t\t\o\j\n\7\v\3\7\t\p\k\u\f\1\3\k\z\j\h\i\x\m\6\r\4\8\1\w\w\8\n\p\e\c\c\k\w\b\h\o\t\x\p\w\p\5\7\x\y\9\i\d\i\b\h\7\j\f\n\q\l\j\b\k\5\f\q\3\c\0\k\g\j\j\c\m\2\i\v\u\b\o\0\a\m\r\z\w\r\m\d\z\k\5\7\q\l\f\f\a\3\f\0\x\s\l\t\n\z\4\9\c\s\g\3\x\8\f\u\c\e\k\b\o\c\t\8\s\b\0\9\r\f\o\u\k\j ]] 00:44:41.641 00:44:41.641 real 0m16.420s 00:44:41.641 user 0m13.365s 00:44:41.641 sys 0m1.977s 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:44:41.641 00:44:41.641 real 1m6.157s 00:44:41.641 user 0m51.976s 00:44:41.641 sys 0m8.006s 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:41.641 ************************************ 00:44:41.641 09:11:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:44:41.641 
END TEST spdk_dd_posix 00:44:41.641 ************************************ 00:44:41.641 09:11:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:44:41.641 09:11:16 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:44:41.641 09:11:16 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:41.641 09:11:16 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:41.641 09:11:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:44:41.641 ************************************ 00:44:41.641 START TEST spdk_dd_malloc 00:44:41.641 ************************************ 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:44:41.641 * Looking for test storage... 00:44:41.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:41.641 09:11:16 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:44:41.642 ************************************ 00:44:41.642 START TEST dd_malloc_copy 00:44:41.642 ************************************ 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:44:41.642 09:11:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:44:41.642 [2024-07-12 09:11:16.759751] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:41.642 [2024-07-12 09:11:16.760110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170838 ] 00:44:41.642 { 00:44:41.642 "subsystems": [ 00:44:41.642 { 00:44:41.642 "subsystem": "bdev", 00:44:41.642 "config": [ 00:44:41.642 { 00:44:41.642 "params": { 00:44:41.642 "num_blocks": 1048576, 00:44:41.642 "block_size": 512, 00:44:41.642 "name": "malloc0" 00:44:41.642 }, 00:44:41.642 "method": "bdev_malloc_create" 00:44:41.642 }, 00:44:41.642 { 00:44:41.642 "params": { 00:44:41.642 "num_blocks": 1048576, 00:44:41.642 "block_size": 512, 00:44:41.642 "name": "malloc1" 00:44:41.642 }, 00:44:41.642 "method": "bdev_malloc_create" 00:44:41.642 }, 00:44:41.642 { 00:44:41.642 "method": "bdev_wait_for_examine" 00:44:41.642 } 00:44:41.642 ] 00:44:41.642 } 00:44:41.642 ] 00:44:41.642 } 00:44:41.899 [2024-07-12 09:11:16.919309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:42.156 [2024-07-12 09:11:17.144684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.741  Copying: 164/512 [MB] (164 MBps) Copying: 327/512 [MB] (162 MBps) Copying: 495/512 [MB] (167 MBps) Copying: 512/512 [MB] (average 165 MBps) 00:44:50.741 00:44:50.741 09:11:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:44:50.741 09:11:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:44:50.741 09:11:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:44:50.741 09:11:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:44:50.741 { 00:44:50.741 "subsystems": [ 00:44:50.741 { 00:44:50.741 "subsystem": "bdev", 00:44:50.741 "config": [ 00:44:50.741 { 00:44:50.741 "params": { 00:44:50.741 "num_blocks": 1048576, 00:44:50.741 "block_size": 512, 00:44:50.741 "name": "malloc0" 00:44:50.741 }, 00:44:50.741 "method": "bdev_malloc_create" 00:44:50.741 }, 00:44:50.741 { 00:44:50.741 "params": { 00:44:50.741 "num_blocks": 1048576, 00:44:50.741 "block_size": 512, 00:44:50.741 "name": "malloc1" 00:44:50.741 }, 00:44:50.741 "method": "bdev_malloc_create" 00:44:50.741 }, 00:44:50.741 { 00:44:50.741 "method": "bdev_wait_for_examine" 00:44:50.741 } 00:44:50.741 ] 00:44:50.741 } 00:44:50.741 ] 00:44:50.741 } 00:44:50.741 [2024-07-12 09:11:25.609154] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:44:50.741 [2024-07-12 09:11:25.609524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170959 ] 00:44:50.741 [2024-07-12 09:11:25.786870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.998 [2024-07-12 09:11:26.022318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:59.574  Copying: 164/512 [MB] (164 MBps) Copying: 334/512 [MB] (169 MBps) Copying: 503/512 [MB] (168 MBps) Copying: 512/512 [MB] (average 167 MBps) 00:44:59.574 00:44:59.574 ************************************ 00:44:59.574 END TEST dd_malloc_copy 00:44:59.574 ************************************ 00:44:59.574 00:44:59.574 real 0m17.593s 00:44:59.574 user 0m16.162s 00:44:59.574 sys 0m1.305s 00:44:59.574 09:11:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:59.574 09:11:34 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:44:59.574 09:11:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:44:59.574 00:44:59.574 real 0m17.728s 00:44:59.574 user 0m16.244s 00:44:59.574 sys 0m1.360s 00:44:59.574 ************************************ 00:44:59.574 END TEST spdk_dd_malloc 00:44:59.574 ************************************ 00:44:59.574 09:11:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:59.574 09:11:34 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:44:59.574 09:11:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:44:59.574 09:11:34 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:44:59.574 09:11:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:44:59.574 09:11:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:59.574 09:11:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:44:59.574 ************************************ 00:44:59.574 START TEST spdk_dd_bdev_to_bdev 00:44:59.574 ************************************ 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:44:59.574 * Looking for test storage... 
00:44:59.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:44:59.574 09:11:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:44:59.574 [2024-07-12 09:11:34.536007] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:44:59.574 [2024-07-12 09:11:34.536469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171132 ] 00:44:59.574 [2024-07-12 09:11:34.710292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:59.832 [2024-07-12 09:11:34.972779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.770  Copying: 256/256 [MB] (average 1117 MBps) 00:45:01.770 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:01.770 ************************************ 00:45:01.770 START TEST dd_inflate_file 00:45:01.770 ************************************ 00:45:01.770 09:11:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:45:01.770 [2024-07-12 09:11:36.837381] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:01.770 [2024-07-12 09:11:36.837841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171162 ] 00:45:02.028 [2024-07-12 09:11:37.009050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.286 [2024-07-12 09:11:37.282229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:03.917  Copying: 64/64 [MB] (average 1163 MBps) 00:45:03.917 00:45:03.917 ************************************ 00:45:03.917 END TEST dd_inflate_file 00:45:03.917 ************************************ 00:45:03.917 00:45:03.917 real 0m2.116s 00:45:03.917 user 0m1.706s 00:45:03.917 sys 0m0.277s 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:03.917 ************************************ 00:45:03.917 START TEST dd_copy_to_out_bdev 00:45:03.917 ************************************ 00:45:03.917 09:11:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:45:03.917 { 00:45:03.917 "subsystems": [ 00:45:03.917 { 00:45:03.917 "subsystem": "bdev", 00:45:03.917 "config": [ 00:45:03.917 { 00:45:03.917 "params": { 00:45:03.917 "block_size": 4096, 00:45:03.917 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:03.917 "name": "aio1" 00:45:03.917 }, 00:45:03.917 "method": "bdev_aio_create" 00:45:03.917 }, 00:45:03.917 { 00:45:03.917 "params": { 00:45:03.917 "trtype": "pcie", 00:45:03.917 "traddr": "0000:00:10.0", 00:45:03.917 "name": "Nvme0" 00:45:03.917 }, 00:45:03.917 "method": "bdev_nvme_attach_controller" 00:45:03.917 }, 00:45:03.917 { 00:45:03.917 "method": "bdev_wait_for_examine" 00:45:03.917 } 00:45:03.917 ] 00:45:03.917 } 00:45:03.917 ] 00:45:03.917 } 00:45:03.917 [2024-07-12 09:11:39.011145] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:03.917 [2024-07-12 09:11:39.011488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171220 ] 00:45:04.175 [2024-07-12 09:11:39.182015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:04.433 [2024-07-12 09:11:39.397722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.741  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:45:07.741 00:45:07.741 ************************************ 00:45:07.741 END TEST dd_copy_to_out_bdev 00:45:07.741 ************************************ 00:45:07.741 00:45:07.741 real 0m3.533s 00:45:07.741 user 0m3.145s 00:45:07.741 sys 0m0.305s 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:07.741 ************************************ 00:45:07.741 START TEST dd_offset_magic 00:45:07.741 ************************************ 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:07.741 09:11:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:07.741 { 00:45:07.741 "subsystems": [ 00:45:07.741 { 00:45:07.741 "subsystem": "bdev", 00:45:07.741 "config": [ 00:45:07.741 { 00:45:07.741 "params": { 00:45:07.741 "block_size": 4096, 00:45:07.741 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:07.741 "name": "aio1" 00:45:07.741 }, 00:45:07.741 "method": "bdev_aio_create" 00:45:07.741 }, 00:45:07.741 { 00:45:07.741 "params": { 00:45:07.741 "trtype": "pcie", 00:45:07.741 "traddr": "0000:00:10.0", 00:45:07.741 "name": "Nvme0" 00:45:07.741 }, 00:45:07.741 "method": "bdev_nvme_attach_controller" 
00:45:07.741 }, 00:45:07.741 { 00:45:07.741 "method": "bdev_wait_for_examine" 00:45:07.741 } 00:45:07.741 ] 00:45:07.741 } 00:45:07.741 ] 00:45:07.741 } 00:45:07.741 [2024-07-12 09:11:42.605069] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:07.741 [2024-07-12 09:11:42.605333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171307 ] 00:45:07.741 [2024-07-12 09:11:42.786823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:07.999 [2024-07-12 09:11:43.014842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:09.937  Copying: 65/65 [MB] (average 278 MBps) 00:45:09.937 00:45:09.937 09:11:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:45:09.937 09:11:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:45:09.937 09:11:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:09.937 09:11:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:09.937 { 00:45:09.937 "subsystems": [ 00:45:09.937 { 00:45:09.937 "subsystem": "bdev", 00:45:09.937 "config": [ 00:45:09.937 { 00:45:09.937 "params": { 00:45:09.937 "block_size": 4096, 00:45:09.937 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:09.937 "name": "aio1" 00:45:09.937 }, 00:45:09.938 "method": "bdev_aio_create" 00:45:09.938 }, 00:45:09.938 { 00:45:09.938 "params": { 00:45:09.938 "trtype": "pcie", 00:45:09.938 "traddr": "0000:00:10.0", 00:45:09.938 "name": "Nvme0" 00:45:09.938 }, 00:45:09.938 "method": "bdev_nvme_attach_controller" 00:45:09.938 }, 00:45:09.938 { 00:45:09.938 "method": "bdev_wait_for_examine" 00:45:09.938 } 00:45:09.938 ] 00:45:09.938 } 00:45:09.938 ] 00:45:09.938 } 00:45:09.938 [2024-07-12 09:11:44.907295] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:09.938 [2024-07-12 09:11:44.907658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171339 ] 00:45:09.938 [2024-07-12 09:11:45.081898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:10.195 [2024-07-12 09:11:45.336760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:12.134  Copying: 1024/1024 [kB] (average 1000 MBps) 00:45:12.134 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:12.134 09:11:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:12.134 { 00:45:12.134 "subsystems": [ 00:45:12.134 { 00:45:12.134 "subsystem": "bdev", 00:45:12.134 "config": [ 00:45:12.134 { 00:45:12.134 "params": { 00:45:12.134 "block_size": 4096, 00:45:12.134 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:12.134 "name": "aio1" 00:45:12.134 }, 00:45:12.134 "method": "bdev_aio_create" 00:45:12.134 }, 00:45:12.134 { 00:45:12.134 "params": { 00:45:12.134 "trtype": "pcie", 00:45:12.134 "traddr": "0000:00:10.0", 00:45:12.134 "name": "Nvme0" 00:45:12.134 }, 00:45:12.134 "method": "bdev_nvme_attach_controller" 00:45:12.134 }, 00:45:12.134 { 00:45:12.134 "method": "bdev_wait_for_examine" 00:45:12.134 } 00:45:12.134 ] 00:45:12.134 } 00:45:12.134 ] 00:45:12.134 } 00:45:12.134 [2024-07-12 09:11:47.073118] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:12.134 [2024-07-12 09:11:47.073358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171368 ] 00:45:12.134 [2024-07-12 09:11:47.247353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:12.392 [2024-07-12 09:11:47.466490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:14.331  Copying: 65/65 [MB] (average 333 MBps) 00:45:14.331 00:45:14.331 09:11:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:45:14.331 09:11:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:45:14.331 09:11:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:45:14.331 09:11:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:14.331 { 00:45:14.331 "subsystems": [ 00:45:14.331 { 00:45:14.331 "subsystem": "bdev", 00:45:14.331 "config": [ 00:45:14.331 { 00:45:14.331 "params": { 00:45:14.331 "block_size": 4096, 00:45:14.331 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:14.331 "name": "aio1" 00:45:14.331 }, 00:45:14.331 "method": "bdev_aio_create" 00:45:14.331 }, 00:45:14.331 { 00:45:14.331 "params": { 00:45:14.331 "trtype": "pcie", 00:45:14.331 "traddr": "0000:00:10.0", 00:45:14.331 "name": "Nvme0" 00:45:14.331 }, 00:45:14.331 "method": "bdev_nvme_attach_controller" 00:45:14.331 }, 00:45:14.331 { 00:45:14.331 "method": "bdev_wait_for_examine" 00:45:14.331 } 00:45:14.331 ] 00:45:14.331 } 00:45:14.331 ] 00:45:14.331 } 00:45:14.331 [2024-07-12 09:11:49.360834] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:14.332 [2024-07-12 09:11:49.361288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171401 ] 00:45:14.589 [2024-07-12 09:11:49.537710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:14.846 [2024-07-12 09:11:49.794394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:16.538  Copying: 1024/1024 [kB] (average 1000 MBps) 00:45:16.538 00:45:16.538 ************************************ 00:45:16.538 END TEST dd_offset_magic 00:45:16.538 ************************************ 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:45:16.538 00:45:16.538 real 0m9.044s 00:45:16.538 user 0m7.190s 00:45:16.538 sys 0m1.109s 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:45:16.538 09:11:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:16.538 [2024-07-12 09:11:51.677094] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:16.538 [2024-07-12 09:11:51.677556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171453 ] 00:45:16.538 { 00:45:16.538 "subsystems": [ 00:45:16.538 { 00:45:16.538 "subsystem": "bdev", 00:45:16.538 "config": [ 00:45:16.538 { 00:45:16.538 "params": { 00:45:16.538 "block_size": 4096, 00:45:16.538 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:16.538 "name": "aio1" 00:45:16.538 }, 00:45:16.538 "method": "bdev_aio_create" 00:45:16.538 }, 00:45:16.538 { 00:45:16.538 "params": { 00:45:16.538 "trtype": "pcie", 00:45:16.538 "traddr": "0000:00:10.0", 00:45:16.538 "name": "Nvme0" 00:45:16.538 }, 00:45:16.538 "method": "bdev_nvme_attach_controller" 00:45:16.538 }, 00:45:16.538 { 00:45:16.538 "method": "bdev_wait_for_examine" 00:45:16.538 } 00:45:16.538 ] 00:45:16.538 } 00:45:16.538 ] 00:45:16.538 } 00:45:16.795 [2024-07-12 09:11:51.853396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.053 [2024-07-12 09:11:52.112103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:18.550  Copying: 5120/5120 [kB] (average 1250 MBps) 00:45:18.550 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:45:18.550 09:11:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:18.808 [2024-07-12 09:11:53.745888] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:18.808 [2024-07-12 09:11:53.746842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171503 ] 00:45:18.808 { 00:45:18.808 "subsystems": [ 00:45:18.808 { 00:45:18.808 "subsystem": "bdev", 00:45:18.808 "config": [ 00:45:18.808 { 00:45:18.808 "params": { 00:45:18.808 "block_size": 4096, 00:45:18.808 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:45:18.808 "name": "aio1" 00:45:18.808 }, 00:45:18.808 "method": "bdev_aio_create" 00:45:18.808 }, 00:45:18.808 { 00:45:18.808 "params": { 00:45:18.808 "trtype": "pcie", 00:45:18.808 "traddr": "0000:00:10.0", 00:45:18.808 "name": "Nvme0" 00:45:18.808 }, 00:45:18.808 "method": "bdev_nvme_attach_controller" 00:45:18.808 }, 00:45:18.808 { 00:45:18.808 "method": "bdev_wait_for_examine" 00:45:18.808 } 00:45:18.808 ] 00:45:18.808 } 00:45:18.808 ] 00:45:18.808 } 00:45:18.808 [2024-07-12 09:11:53.919812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.067 [2024-07-12 09:11:54.158467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:20.998  Copying: 5120/5120 [kB] (average 416 MBps) 00:45:20.998 00:45:20.998 09:11:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:45:20.998 ************************************ 00:45:20.998 END TEST spdk_dd_bdev_to_bdev 00:45:20.998 ************************************ 00:45:20.998 00:45:20.998 real 0m21.736s 00:45:20.998 user 0m17.553s 00:45:20.998 sys 0m2.846s 00:45:20.998 09:11:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:20.998 09:11:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:20.998 09:11:56 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:45:20.998 09:11:56 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:45:20.998 09:11:56 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:45:20.998 09:11:56 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:20.998 09:11:56 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:20.998 09:11:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:45:20.998 ************************************ 00:45:20.998 START TEST spdk_dd_sparse 00:45:20.998 ************************************ 00:45:20.998 09:11:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:45:21.254 * Looking for test storage... 
00:45:21.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:45:21.255 1+0 records in 00:45:21.255 1+0 records out 00:45:21.255 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00740454 s, 566 MB/s 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:45:21.255 1+0 records in 00:45:21.255 1+0 records out 00:45:21.255 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00733001 s, 572 MB/s 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:45:21.255 1+0 records in 00:45:21.255 1+0 records out 00:45:21.255 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00635743 s, 660 MB/s 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:45:21.255 ************************************ 00:45:21.255 START TEST dd_sparse_file_to_file 00:45:21.255 ************************************ 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:45:21.255 09:11:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:45:21.255 { 00:45:21.255 "subsystems": [ 00:45:21.255 { 00:45:21.255 "subsystem": "bdev", 00:45:21.255 "config": [ 00:45:21.255 { 00:45:21.255 "params": { 00:45:21.255 "block_size": 4096, 00:45:21.255 "filename": "dd_sparse_aio_disk", 00:45:21.255 "name": "dd_aio" 00:45:21.255 }, 00:45:21.255 "method": "bdev_aio_create" 00:45:21.255 }, 00:45:21.255 { 00:45:21.255 "params": { 00:45:21.255 "lvs_name": "dd_lvstore", 00:45:21.255 "bdev_name": "dd_aio" 
00:45:21.255 }, 00:45:21.255 "method": "bdev_lvol_create_lvstore" 00:45:21.255 }, 00:45:21.255 { 00:45:21.255 "method": "bdev_wait_for_examine" 00:45:21.255 } 00:45:21.255 ] 00:45:21.255 } 00:45:21.255 ] 00:45:21.255 } 00:45:21.255 [2024-07-12 09:11:56.348228] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:21.255 [2024-07-12 09:11:56.348619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171593 ] 00:45:21.512 [2024-07-12 09:11:56.522714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:21.769 [2024-07-12 09:11:56.759324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:23.398  Copying: 12/36 [MB] (average 857 MBps) 00:45:23.398 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:45:23.398 ************************************ 00:45:23.398 END TEST dd_sparse_file_to_file 00:45:23.398 ************************************ 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:45:23.398 00:45:23.398 real 0m2.227s 00:45:23.398 user 0m1.761s 00:45:23.398 sys 0m0.304s 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:45:23.398 ************************************ 00:45:23.398 START TEST dd_sparse_file_to_bdev 00:45:23.398 ************************************ 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:45:23.398 09:11:58 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size_in_mib"]=36 ["thin_provision"]=true) 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:45:23.398 09:11:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:23.656 [2024-07-12 09:11:58.615166] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:23.656 [2024-07-12 09:11:58.616214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171653 ] 00:45:23.656 { 00:45:23.656 "subsystems": [ 00:45:23.656 { 00:45:23.656 "subsystem": "bdev", 00:45:23.656 "config": [ 00:45:23.656 { 00:45:23.656 "params": { 00:45:23.656 "block_size": 4096, 00:45:23.656 "filename": "dd_sparse_aio_disk", 00:45:23.656 "name": "dd_aio" 00:45:23.656 }, 00:45:23.656 "method": "bdev_aio_create" 00:45:23.656 }, 00:45:23.656 { 00:45:23.656 "params": { 00:45:23.656 "size_in_mib": 36, 00:45:23.656 "lvs_name": "dd_lvstore", 00:45:23.656 "thin_provision": true, 00:45:23.656 "lvol_name": "dd_lvol" 00:45:23.656 }, 00:45:23.656 "method": "bdev_lvol_create" 00:45:23.656 }, 00:45:23.656 { 00:45:23.656 "method": "bdev_wait_for_examine" 00:45:23.656 } 00:45:23.656 ] 00:45:23.656 } 00:45:23.656 ] 00:45:23.656 } 00:45:23.656 [2024-07-12 09:11:58.781931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:23.913 [2024-07-12 09:11:59.035917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:25.881  Copying: 12/36 [MB] (average 521 MBps) 00:45:25.881 00:45:25.881 ************************************ 00:45:25.881 END TEST dd_sparse_file_to_bdev 00:45:25.881 ************************************ 00:45:25.881 00:45:25.881 real 0m2.174s 00:45:25.881 user 0m1.809s 00:45:25.881 sys 0m0.269s 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:45:25.881 ************************************ 00:45:25.881 START TEST dd_sparse_bdev_to_file 00:45:25.881 ************************************ 00:45:25.881 
09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:45:25.881 09:12:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:45:25.881 { 00:45:25.881 "subsystems": [ 00:45:25.881 { 00:45:25.881 "subsystem": "bdev", 00:45:25.881 "config": [ 00:45:25.881 { 00:45:25.881 "params": { 00:45:25.881 "block_size": 4096, 00:45:25.881 "filename": "dd_sparse_aio_disk", 00:45:25.881 "name": "dd_aio" 00:45:25.881 }, 00:45:25.881 "method": "bdev_aio_create" 00:45:25.881 }, 00:45:25.882 { 00:45:25.882 "method": "bdev_wait_for_examine" 00:45:25.882 } 00:45:25.882 ] 00:45:25.882 } 00:45:25.882 ] 00:45:25.882 } 00:45:25.882 [2024-07-12 09:12:00.871785] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:25.882 [2024-07-12 09:12:00.873019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171718 ] 00:45:25.882 [2024-07-12 09:12:01.049626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.140 [2024-07-12 09:12:01.308847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:28.079  Copying: 12/36 [MB] (average 923 MBps) 00:45:28.079 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:45:28.079 ************************************ 00:45:28.079 END TEST dd_sparse_bdev_to_file 00:45:28.079 ************************************ 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:45:28.079 00:45:28.079 real 0m2.419s 00:45:28.079 user 0m2.045s 00:45:28.079 sys 0m0.264s 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:45:28.079 ************************************ 00:45:28.079 END TEST spdk_dd_sparse 00:45:28.079 ************************************ 00:45:28.079 00:45:28.079 real 0m7.105s 00:45:28.079 user 0m5.766s 00:45:28.079 sys 0m0.959s 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.079 09:12:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:45:28.337 09:12:03 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:45:28.337 09:12:03 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:45:28.337 09:12:03 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:28.337 09:12:03 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:28.337 09:12:03 spdk_dd 
-- common/autotest_common.sh@10 -- # set +x 00:45:28.337 ************************************ 00:45:28.337 START TEST spdk_dd_negative 00:45:28.337 ************************************ 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:45:28.337 * Looking for test storage... 00:45:28.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:28.337 ************************************ 00:45:28.337 START TEST dd_invalid_arguments 00:45:28.337 ************************************ 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:28.337 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:45:28.337 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:45:28.337 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:45:28.337 00:45:28.337 CPU options: 00:45:28.337 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:45:28.337 (like [0,1,10]) 00:45:28.337 --lcores lcore to CPU mapping list. The list is in the format: 00:45:28.337 [<,lcores[@CPUs]>...] 00:45:28.337 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:45:28.337 Within the group, '-' is used for range separator, 00:45:28.337 ',' is used for single number separator. 
00:45:28.337 '( )' can be omitted for single element group, 00:45:28.337 '@' can be omitted if cpus and lcores have the same value 00:45:28.337 --disable-cpumask-locks Disable CPU core lock files. 00:45:28.337 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:45:28.337 pollers in the app support interrupt mode) 00:45:28.337 -p, --main-core main (primary) core for DPDK 00:45:28.337 00:45:28.337 Configuration options: 00:45:28.337 -c, --config, --json JSON config file 00:45:28.337 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:45:28.337 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:45:28.337 --wait-for-rpc wait for RPCs to initialize subsystems 00:45:28.337 --rpcs-allowed comma-separated list of permitted RPCS 00:45:28.337 --json-ignore-init-errors don't exit on invalid config entry 00:45:28.337 00:45:28.337 Memory options: 00:45:28.337 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:45:28.337 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:45:28.337 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:45:28.337 -R, --huge-unlink unlink huge files after initialization 00:45:28.337 -n, --mem-channels number of memory channels used for DPDK 00:45:28.337 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:45:28.338 --msg-mempool-size global message memory pool size in count (default: 262143) 00:45:28.338 --no-huge run without using hugepages 00:45:28.338 -i, --shm-id shared memory ID (optional) 00:45:28.338 -g, --single-file-segments force creating just one hugetlbfs file 00:45:28.338 00:45:28.338 PCI options: 00:45:28.338 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:45:28.338 -B, --pci-blocked pci addr to block (can be used more than once) 00:45:28.338 -u, --no-pci disable PCI access 00:45:28.338 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:45:28.338 00:45:28.338 Log options: 00:45:28.338 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:45:28.338 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:45:28.338 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:45:28.338 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:45:28.338 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:45:28.338 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:45:28.338 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:45:28.338 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:45:28.338 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:45:28.338 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:45:28.338 virtio_vfio_user, vmd) 00:45:28.338 --silence-noticelog disable notice level logging to stderr 00:45:28.338 00:45:28.338 Trace options: 00:45:28.338 --num-trace-entries number of trace entries for each core, must be power of 2, 00:45:28.338 setting 0 to disable trace (default 32768) 00:45:28.338 Tracepoints vary in size and can use more than one trace entry. 00:45:28.338 -e, --tpoint-group [:] 00:45:28.338 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:45:28.338 [2024-07-12 09:12:03.494287] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:45:28.596 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 
00:45:28.596 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:45:28.596 a tracepoint group. First tpoint inside a group can be enabled by 00:45:28.596 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:45:28.596 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:45:28.596 in /include/spdk_internal/trace_defs.h 00:45:28.596 00:45:28.596 Other options: 00:45:28.596 -h, --help show this usage 00:45:28.596 -v, --version print SPDK version 00:45:28.596 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:45:28.596 --env-context Opaque context for use of the env implementation 00:45:28.596 00:45:28.596 Application specific: 00:45:28.596 [--------- DD Options ---------] 00:45:28.596 --if Input file. Must specify either --if or --ib. 00:45:28.596 --ib Input bdev. Must specifier either --if or --ib 00:45:28.596 --of Output file. Must specify either --of or --ob. 00:45:28.596 --ob Output bdev. Must specify either --of or --ob. 00:45:28.596 --iflag Input file flags. 00:45:28.596 --oflag Output file flags. 00:45:28.596 --bs I/O unit size (default: 4096) 00:45:28.596 --qd Queue depth (default: 2) 00:45:28.596 --count I/O unit count. The number of I/O units to copy. (default: all) 00:45:28.596 --skip Skip this many I/O units at start of input. (default: 0) 00:45:28.596 --seek Skip this many I/O units at start of output. (default: 0) 00:45:28.596 --aio Force usage of AIO. (by default io_uring is used if available) 00:45:28.596 --sparse Enable hole skipping in input target 00:45:28.596 Available iflag and oflag values: 00:45:28.596 append - append mode 00:45:28.596 direct - use direct I/O for data 00:45:28.596 directory - fail unless a directory 00:45:28.596 dsync - use synchronized I/O for data 00:45:28.596 noatime - do not update access time 00:45:28.596 noctty - do not assign controlling terminal from file 00:45:28.596 nofollow - do not follow symlinks 00:45:28.596 nonblock - use non-blocking I/O 00:45:28.596 sync - use synchronized I/O for data and metadata 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:28.596 ************************************ 00:45:28.596 END TEST dd_invalid_arguments 00:45:28.596 ************************************ 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:28.596 00:45:28.596 real 0m0.118s 00:45:28.596 user 0m0.068s 00:45:28.596 sys 0m0.050s 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:28.596 ************************************ 
00:45:28.596 START TEST dd_double_input 00:45:28.596 ************************************ 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.596 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:45:28.597 [2024-07-12 09:12:03.687384] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:28.597 00:45:28.597 real 0m0.148s 00:45:28.597 user 0m0.066s 00:45:28.597 sys 0m0.081s 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.597 ************************************ 00:45:28.597 END TEST dd_double_input 00:45:28.597 ************************************ 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:28.597 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:28.880 ************************************ 00:45:28.880 START TEST dd_double_output 00:45:28.880 ************************************ 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:45:28.880 [2024-07-12 09:12:03.859053] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:28.880 00:45:28.880 real 0m0.124s 00:45:28.880 user 0m0.056s 00:45:28.880 sys 0m0.068s 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:45:28.880 ************************************ 00:45:28.880 END TEST dd_double_output 00:45:28.880 ************************************ 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:28.880 ************************************ 00:45:28.880 START TEST dd_no_input 00:45:28.880 ************************************ 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.880 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.881 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.881 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:28.881 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:28.881 09:12:03 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:28.881 09:12:03 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:45:28.881 [2024-07-12 09:12:04.030285] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:29.139 00:45:29.139 real 0m0.117s 00:45:29.139 user 0m0.071s 00:45:29.139 sys 0m0.046s 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:45:29.139 ************************************ 00:45:29.139 END TEST dd_no_input 00:45:29.139 ************************************ 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:29.139 ************************************ 00:45:29.139 START TEST dd_no_output 00:45:29.139 ************************************ 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:29.139 09:12:04 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:45:29.139 [2024-07-12 09:12:04.202313] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:29.139 ************************************ 00:45:29.139 END TEST dd_no_output 00:45:29.139 ************************************ 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:29.139 00:45:29.139 real 0m0.121s 00:45:29.139 user 0m0.054s 00:45:29.139 sys 0m0.068s 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:29.139 ************************************ 00:45:29.139 START TEST dd_wrong_blocksize 00:45:29.139 ************************************ 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:29.139 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:45:29.397 [2024-07-12 09:12:04.368120] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:29.397 ************************************ 00:45:29.397 END TEST dd_wrong_blocksize 00:45:29.397 ************************************ 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:29.397 00:45:29.397 real 0m0.105s 00:45:29.397 user 0m0.063s 00:45:29.397 sys 0m0.042s 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:29.397 ************************************ 00:45:29.397 START TEST dd_smaller_blocksize 00:45:29.397 ************************************ 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:29.397 09:12:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:45:29.397 [2024-07-12 09:12:04.531490] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:29.397 [2024-07-12 09:12:04.531730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172004 ] 00:45:29.656 [2024-07-12 09:12:04.707898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:29.914 [2024-07-12 09:12:04.971555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:30.480 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:45:30.480 [2024-07-12 09:12:05.657930] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:45:30.480 [2024-07-12 09:12:05.658054] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:31.414 [2024-07-12 09:12:06.436267] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:31.673 00:45:31.673 real 0m2.402s 00:45:31.673 user 0m1.774s 00:45:31.673 sys 0m0.527s 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:31.673 09:12:06 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:45:31.673 ************************************ 00:45:31.673 END TEST dd_smaller_blocksize 00:45:31.673 ************************************ 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:31.931 ************************************ 00:45:31.931 START TEST dd_invalid_count 00:45:31.931 ************************************ 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.931 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:31.932 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:45:31.932 [2024-07-12 09:12:06.988339] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:45:31.932 ************************************ 00:45:31.932 END TEST dd_invalid_count 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:31.932 00:45:31.932 real 0m0.112s 00:45:31.932 user 0m0.039s 00:45:31.932 sys 0m0.074s 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:45:31.932 ************************************ 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 
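Each dd_invalid_* and dd_no_* case above leans on the same negation idiom from common/autotest_common.sh: valid_exec_arg confirms the spdk_dd binary is executable, the command runs with a deliberately bad argument set, and NOT inverts its exit status so the test passes only when spdk_dd rejects the arguments (the es= values in the trace are the captured error codes). A minimal stand-alone sketch of that idiom follows; it is not the verbatim helpers, which also classify the error codes:
NOT() {                                   # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
}
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9   # a negative --count must be refused, as dd_invalid_count checks above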
00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:31.932 ************************************ 00:45:31.932 START TEST dd_invalid_oflag 00:45:31.932 ************************************ 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:31.932 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:45:32.190 [2024-07-12 09:12:07.155340] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:32.190 ************************************ 00:45:32.190 END TEST dd_invalid_oflag 00:45:32.190 ************************************ 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:32.190 00:45:32.190 real 0m0.120s 00:45:32.190 user 0m0.077s 00:45:32.190 sys 0m0.043s 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1142 -- # return 0 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:32.190 ************************************ 00:45:32.190 START TEST dd_invalid_iflag 00:45:32.190 ************************************ 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:45:32.190 [2024-07-12 09:12:07.328557] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:32.190 00:45:32.190 real 0m0.114s 00:45:32.190 user 0m0.057s 00:45:32.190 sys 0m0.055s 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:32.190 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:45:32.190 ************************************ 00:45:32.190 END TEST dd_invalid_iflag 00:45:32.190 ************************************ 
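For contrast with the rejected invocations above, the DD Options block of the spdk_dd usage text (dumped during dd_invalid_arguments) implies a well-formed copy supplies exactly one input (--if or --ib) and one output (--of or --ob). A rough sketch with placeholder paths, not commands from this run:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096 --count=1024   # file-to-file copy of 1024 I/O units of 4096 bytes
# --ib/--ob name bdevs instead of files; mixing --if with --ib (or --of with --ob) triggers the
# "You may specify either ..." errors exercised by dd_double_input and dd_double_output above.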
00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:32.448 ************************************ 00:45:32.448 START TEST dd_unknown_flag 00:45:32.448 ************************************ 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:32.448 09:12:07 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:45:32.448 [2024-07-12 09:12:07.499818] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:32.448 [2024-07-12 09:12:07.500231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172135 ] 00:45:32.706 [2024-07-12 09:12:07.672216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:32.706 [2024-07-12 09:12:07.894966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:33.271 [2024-07-12 09:12:08.216130] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:45:33.271 [2024-07-12 09:12:08.216268] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:33.271  Copying: 0/0 [B] (average 0 Bps)[2024-07-12 09:12:08.216455] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:45:33.878 [2024-07-12 09:12:08.982740] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:34.445 00:45:34.445 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:45:34.445 ************************************ 00:45:34.445 END TEST dd_unknown_flag 00:45:34.445 ************************************ 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:34.445 00:45:34.445 real 0m2.051s 00:45:34.445 user 0m1.648s 00:45:34.445 sys 0m0.259s 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:34.445 ************************************ 00:45:34.445 START TEST dd_invalid_json 00:45:34.445 ************************************ 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:45:34.445 
09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:45:34.445 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:45:34.445 [2024-07-12 09:12:09.601521] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:34.445 [2024-07-12 09:12:09.601925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172183 ] 00:45:34.703 [2024-07-12 09:12:09.775268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.960 [2024-07-12 09:12:09.990127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:34.960 [2024-07-12 09:12:09.990237] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:45:34.960 [2024-07-12 09:12:09.990297] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:45:34.960 [2024-07-12 09:12:09.990327] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:34.960 [2024-07-12 09:12:09.990390] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:45:35.217 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:45:35.217 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:35.218 00:45:35.218 real 0m0.860s 00:45:35.218 user 0m0.601s 00:45:35.218 sys 0m0.160s 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:35.218 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
common/autotest_common.sh@10 -- # set +x 00:45:35.218 ************************************ 00:45:35.218 END TEST dd_invalid_json 00:45:35.218 ************************************ 00:45:35.476 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:45:35.476 ************************************ 00:45:35.476 END TEST spdk_dd_negative 00:45:35.476 ************************************ 00:45:35.476 00:45:35.476 real 0m7.109s 00:45:35.476 user 0m4.980s 00:45:35.476 sys 0m1.747s 00:45:35.476 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:35.476 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:45:35.476 09:12:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:45:35.476 00:45:35.476 real 2m50.203s 00:45:35.476 user 2m17.921s 00:45:35.476 sys 0m22.108s 00:45:35.476 09:12:10 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:35.476 ************************************ 00:45:35.476 END TEST spdk_dd 00:45:35.476 09:12:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:45:35.476 ************************************ 00:45:35.476 09:12:10 -- common/autotest_common.sh@1142 -- # return 0 00:45:35.476 09:12:10 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:45:35.476 09:12:10 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:45:35.476 09:12:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:45:35.476 09:12:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:35.476 09:12:10 -- common/autotest_common.sh@10 -- # set +x 00:45:35.476 ************************************ 00:45:35.476 START TEST blockdev_nvme 00:45:35.476 ************************************ 00:45:35.476 09:12:10 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:45:35.476 * Looking for test storage... 
00:45:35.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:45:35.476 09:12:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=172273 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 172273 00:45:35.476 09:12:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:45:35.476 09:12:10 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 172273 ']' 00:45:35.476 09:12:10 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:35.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:35.477 09:12:10 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:35.477 09:12:10 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:35.477 09:12:10 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:35.477 09:12:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:35.734 [2024-07-12 09:12:10.685145] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:45:35.734 [2024-07-12 09:12:10.685346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172273 ] 00:45:35.734 [2024-07-12 09:12:10.844920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:35.992 [2024-07-12 09:12:11.132215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:36.928 09:12:11 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:36.928 09:12:11 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:45:36.928 09:12:11 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:45:36.928 09:12:11 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.928 09:12:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.928 09:12:12 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.928 09:12:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:45:36.928 09:12:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.928 09:12:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.928 09:12:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:36.928 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.186 09:12:12 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b9dc7c5f-a002-486f-bfe0-6354c8063496"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b9dc7c5f-a002-486f-bfe0-6354c8063496",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:45:37.186 09:12:12 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 172273 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 172273 ']' 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 172273 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172273 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172273' 00:45:37.186 killing process with pid 172273 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 172273 00:45:37.186 09:12:12 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 172273 00:45:39.736 09:12:14 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:45:39.736 09:12:14 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
00:45:39.736 09:12:14 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:45:39.736 09:12:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:39.736 09:12:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:39.736 ************************************ 00:45:39.736 START TEST bdev_hello_world 00:45:39.736 ************************************ 00:45:39.736 09:12:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:45:39.736 [2024-07-12 09:12:14.492569] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:39.736 [2024-07-12 09:12:14.493038] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172378 ] 00:45:39.736 [2024-07-12 09:12:14.663799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:39.736 [2024-07-12 09:12:14.877541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.304 [2024-07-12 09:12:15.314585] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:45:40.304 [2024-07-12 09:12:15.314875] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:45:40.304 [2024-07-12 09:12:15.315049] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:45:40.304 [2024-07-12 09:12:15.318156] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:45:40.304 [2024-07-12 09:12:15.318685] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:45:40.304 [2024-07-12 09:12:15.318839] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:45:40.304 [2024-07-12 09:12:15.319158] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:45:40.304 00:45:40.304 [2024-07-12 09:12:15.319329] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:45:41.237 ************************************ 00:45:41.237 END TEST bdev_hello_world 00:45:41.237 ************************************ 00:45:41.237 00:45:41.237 real 0m1.952s 00:45:41.237 user 0m1.587s 00:45:41.237 sys 0m0.264s 00:45:41.237 09:12:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:41.237 09:12:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:45:41.237 09:12:16 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:41.237 09:12:16 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:45:41.237 09:12:16 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:45:41.237 09:12:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:41.237 09:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:41.237 ************************************ 00:45:41.237 START TEST bdev_bounds 00:45:41.237 ************************************ 00:45:41.237 Process bdevio pid: 172422 00:45:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=172422 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 172422' 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 172422 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 172422 ']' 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:41.237 09:12:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:45:41.496 [2024-07-12 09:12:16.475079] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:41.496 [2024-07-12 09:12:16.475498] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172422 ] 00:45:41.496 [2024-07-12 09:12:16.647789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:41.753 [2024-07-12 09:12:16.864428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:41.753 [2024-07-12 09:12:16.864497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:45:41.753 [2024-07-12 09:12:16.864503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.320 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:42.320 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:45:42.320 09:12:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:45:42.579 I/O targets: 00:45:42.579 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:45:42.579 00:45:42.579 00:45:42.579 CUnit - A unit testing framework for C - Version 2.1-3 00:45:42.579 http://cunit.sourceforge.net/ 00:45:42.579 00:45:42.579 00:45:42.579 Suite: bdevio tests on: Nvme0n1 00:45:42.579 Test: blockdev write read block ...passed 00:45:42.579 Test: blockdev write zeroes read block ...passed 00:45:42.579 Test: blockdev write zeroes read no split ...passed 00:45:42.579 Test: blockdev write zeroes read split ...passed 00:45:42.579 Test: blockdev write zeroes read split partial ...passed 00:45:42.579 Test: blockdev reset ...[2024-07-12 09:12:17.637842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:45:42.579 [2024-07-12 09:12:17.642123] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
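[Note] The bounds suite running here is driven by bdevio in wait-for-RPC mode plus its companion script, both taken verbatim from the trace above. A sketch of reproducing the same run by hand, assuming the repo layout shown (the trailing '' mirrors the empty extra argument the harness passes):

  "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
  # With -w, bdevio sets up the bdevs but defers its CUnit suites until the
  # perform_tests RPC arrives:
  "$SPDK/test/bdev/bdevio/tests.py" perform_tests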
00:45:42.579 passed 00:45:42.579 Test: blockdev write read 8 blocks ...passed 00:45:42.579 Test: blockdev write read size > 128k ...passed 00:45:42.579 Test: blockdev write read invalid size ...passed 00:45:42.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:42.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:42.580 Test: blockdev write read max offset ...passed 00:45:42.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:42.580 Test: blockdev writev readv 8 blocks ...passed 00:45:42.580 Test: blockdev writev readv 30 x 1block ...passed 00:45:42.580 Test: blockdev writev readv block ...passed 00:45:42.580 Test: blockdev writev readv size > 128k ...passed 00:45:42.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:42.580 Test: blockdev comparev and writev ...[2024-07-12 09:12:17.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xada0d000 len:0x1000 00:45:42.580 [2024-07-12 09:12:17.651660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:45:42.580 passed 00:45:42.580 Test: blockdev nvme passthru rw ...passed 00:45:42.580 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:12:17.653226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:45:42.580 [2024-07-12 09:12:17.653647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:45:42.580 passed 00:45:42.580 Test: blockdev nvme admin passthru ...passed 00:45:42.580 Test: blockdev copy ...passed 00:45:42.580 00:45:42.580 Run Summary: Type Total Ran Passed Failed Inactive 00:45:42.580 suites 1 1 n/a 0 0 00:45:42.580 tests 23 23 23 0 0 00:45:42.580 asserts 152 152 152 0 n/a 00:45:42.580 00:45:42.580 Elapsed time = 0.223 seconds 00:45:42.580 0 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 172422 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 172422 ']' 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 172422 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172422 00:45:42.580 killing process with pid 172422 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172422' 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 172422 00:45:42.580 09:12:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 172422 00:45:43.953 ************************************ 00:45:43.953 END TEST bdev_bounds 00:45:43.953 ************************************ 00:45:43.953 09:12:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:45:43.953 00:45:43.953 real 0m2.411s 00:45:43.953 user 0m5.647s 
00:45:43.953 sys 0m0.390s 00:45:43.953 09:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:43.953 09:12:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:45:43.953 09:12:18 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:43.953 09:12:18 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:45:43.953 09:12:18 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:45:43.953 09:12:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:43.953 09:12:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:43.953 ************************************ 00:45:43.953 START TEST bdev_nbd 00:45:43.953 ************************************ 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=172482 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 172482 /var/tmp/spdk-nbd.sock 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 172482 ']' 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:45:43.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:43.953 09:12:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:45:43.953 [2024-07-12 09:12:18.955330] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:43.953 [2024-07-12 09:12:18.955780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:43.953 [2024-07-12 09:12:19.136000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:44.211 [2024-07-12 09:12:19.352229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:45:44.776 09:12:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 
00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:45.341 1+0 records in 00:45:45.341 1+0 records out 00:45:45.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735229 s, 5.6 MB/s 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:45:45.341 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:45.598 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:45:45.598 { 00:45:45.598 "nbd_device": "/dev/nbd0", 00:45:45.598 "bdev_name": "Nvme0n1" 00:45:45.598 } 00:45:45.598 ]' 00:45:45.598 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:45:45.598 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:45:45.598 { 00:45:45.599 "nbd_device": "/dev/nbd0", 00:45:45.599 "bdev_name": "Nvme0n1" 00:45:45.599 } 00:45:45.599 ]' 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:45.599 09:12:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:45.856 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:46.422 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:45:46.679 /dev/nbd0 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:45:46.679 09:12:21 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:46.679 1+0 records in 00:45:46.679 1+0 records out 00:45:46.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729336 s, 5.6 MB/s 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:46.679 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:45:46.937 { 00:45:46.937 "nbd_device": "/dev/nbd0", 00:45:46.937 "bdev_name": "Nvme0n1" 00:45:46.937 } 00:45:46.937 ]' 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:45:46.937 { 00:45:46.937 "nbd_device": "/dev/nbd0", 00:45:46.937 "bdev_name": "Nvme0n1" 00:45:46.937 } 00:45:46.937 ]' 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # 
'[' write = write ']' 00:45:46.937 09:12:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:45:46.937 256+0 records in 00:45:46.937 256+0 records out 00:45:46.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00825956 s, 127 MB/s 00:45:46.937 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:45:46.938 256+0 records in 00:45:46.938 256+0 records out 00:45:46.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0614434 s, 17.1 MB/s 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:46.938 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:45:47.195 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:47.452 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:45:47.452 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:45:47.452 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:45:47.709 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:45:47.966 malloc_lvol_verify 00:45:47.966 09:12:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:45:48.223 a5bae3d4-4ab1-4be3-a7ee-df5c3302fe72 00:45:48.223 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:45:48.480 7a89de14-2679-4109-9354-46d94379e1f3 00:45:48.480 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:45:48.736 /dev/nbd0 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:45:48.736 mke2fs 1.45.5 (07-Jan-2020) 00:45:48.736 00:45:48.736 Filesystem too small for a journal 00:45:48.736 Creating filesystem with 1024 4k blocks and 1024 inodes 00:45:48.736 00:45:48.736 Allocating group tables: 0/1 done 00:45:48.736 Writing inode tables: 0/1 done 00:45:48.736 Writing superblocks and filesystem accounting information: 0/1 done 00:45:48.736 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:45:48.736 
09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:48.736 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 172482 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 172482 ']' 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 172482 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172482 00:45:48.993 killing process with pid 172482 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172482' 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 172482 00:45:48.993 09:12:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 172482 00:45:49.980 ************************************ 00:45:49.980 END TEST bdev_nbd 00:45:49.980 ************************************ 00:45:49.980 09:12:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:45:49.980 00:45:49.980 real 0m6.205s 00:45:49.980 user 0m9.111s 00:45:49.980 sys 0m1.296s 00:45:49.980 09:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:49.980 09:12:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:45:49.980 skipping fio tests on NVMe due to multi-ns failures. 00:45:49.980 09:12:25 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:49.980 09:12:25 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:45:49.980 09:12:25 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:45:49.980 09:12:25 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
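[Note] The nbd block above exports the Nvme0n1 bdev as a kernel block device, round-trips random data through it, and finally builds a tiny ext4 filesystem on a logical volume before tearing everything down. Condensed from the commands in the trace, with the socket path as shown there (the temp-file path is shortened here for readability):

  rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0   # fails if the readback differs
  $rpc nbd_stop_disk /dev/nbd0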
00:45:49.980 09:12:25 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:45:49.980 09:12:25 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:45:49.980 09:12:25 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:45:49.980 09:12:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:49.980 09:12:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:45:49.980 ************************************ 00:45:49.980 START TEST bdev_verify 00:45:49.980 ************************************ 00:45:49.980 09:12:25 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:45:50.238 [2024-07-12 09:12:25.208582] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:50.238 [2024-07-12 09:12:25.209369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172702 ] 00:45:50.238 [2024-07-12 09:12:25.390472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:50.495 [2024-07-12 09:12:25.607359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:50.495 [2024-07-12 09:12:25.607359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:51.060 Running I/O for 5 seconds... 00:45:56.320 00:45:56.320 Latency(us) 00:45:56.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:56.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:56.320 Verification LBA range: start 0x0 length 0xa0000 00:45:56.320 Nvme0n1 : 5.01 10772.31 42.08 0.00 0.00 11818.56 763.35 19779.96 00:45:56.320 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:56.320 Verification LBA range: start 0xa0000 length 0xa0000 00:45:56.320 Nvme0n1 : 5.01 10730.09 41.91 0.00 0.00 11864.60 729.83 20971.52 00:45:56.320 =================================================================================================================== 00:45:56.320 Total : 21502.40 83.99 0.00 0.00 11841.54 729.83 20971.52 00:45:57.693 ************************************ 00:45:57.693 END TEST bdev_verify 00:45:57.693 ************************************ 00:45:57.693 00:45:57.693 real 0m7.317s 00:45:57.693 user 0m13.357s 00:45:57.693 sys 0m0.277s 00:45:57.693 09:12:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:57.693 09:12:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:45:57.693 09:12:32 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:45:57.693 09:12:32 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:57.693 09:12:32 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:45:57.693 09:12:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:57.693 09:12:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set 
+x 00:45:57.693 ************************************ 00:45:57.693 START TEST bdev_verify_big_io 00:45:57.693 ************************************ 00:45:57.693 09:12:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:57.693 [2024-07-12 09:12:32.565640] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:45:57.693 [2024-07-12 09:12:32.566017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172805 ] 00:45:57.693 [2024-07-12 09:12:32.730425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:57.961 [2024-07-12 09:12:32.947995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:57.961 [2024-07-12 09:12:32.948000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:58.224 Running I/O for 5 seconds... 00:46:03.487 00:46:03.487 Latency(us) 00:46:03.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:03.487 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:46:03.487 Verification LBA range: start 0x0 length 0xa000 00:46:03.487 Nvme0n1 : 5.04 889.43 55.59 0.00 0.00 140342.74 1236.25 182070.92 00:46:03.487 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:46:03.487 Verification LBA range: start 0xa000 length 0xa000 00:46:03.487 Nvme0n1 : 5.05 915.53 57.22 0.00 0.00 136473.35 722.39 253564.74 00:46:03.487 =================================================================================================================== 00:46:03.487 Total : 1804.96 112.81 0.00 0.00 138378.49 722.39 253564.74 00:46:05.387 ************************************ 00:46:05.387 END TEST bdev_verify_big_io 00:46:05.387 ************************************ 00:46:05.387 00:46:05.387 real 0m7.783s 00:46:05.387 user 0m14.347s 00:46:05.387 sys 0m0.250s 00:46:05.387 09:12:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:05.387 09:12:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:46:05.387 09:12:40 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:46:05.387 09:12:40 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:05.387 09:12:40 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:05.387 09:12:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:05.387 09:12:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:05.387 ************************************ 00:46:05.387 START TEST bdev_write_zeroes 00:46:05.387 ************************************ 00:46:05.387 09:12:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:05.387 [2024-07-12 09:12:40.419965] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
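[Note] Both verify stages use the bdevperf example app against the same generated bdev.json, with the flags copied here from the trace: queue depth 128, verify workload, 5-second runs, two reactors (-m 0x3). The big-I/O variant now starting differs only in its 64 KiB I/O size (-o 65536):

  "$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''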
00:46:05.387 [2024-07-12 09:12:40.420431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172919 ] 00:46:05.646 [2024-07-12 09:12:40.594226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:05.646 [2024-07-12 09:12:40.837463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:06.212 Running I/O for 1 seconds... 00:46:07.143 00:46:07.143 Latency(us) 00:46:07.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:07.143 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:07.143 Nvme0n1 : 1.00 47200.90 184.38 0.00 0.00 2704.40 1117.09 12332.68 00:46:07.143 =================================================================================================================== 00:46:07.143 Total : 47200.90 184.38 0.00 0.00 2704.40 1117.09 12332.68 00:46:08.517 ************************************ 00:46:08.517 END TEST bdev_write_zeroes 00:46:08.517 ************************************ 00:46:08.517 00:46:08.517 real 0m3.183s 00:46:08.517 user 0m2.834s 00:46:08.517 sys 0m0.248s 00:46:08.517 09:12:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:08.517 09:12:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:46:08.517 09:12:43 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:46:08.517 09:12:43 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:08.517 09:12:43 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:08.517 09:12:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:08.517 09:12:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:08.517 ************************************ 00:46:08.517 START TEST bdev_json_nonenclosed 00:46:08.517 ************************************ 00:46:08.517 09:12:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:08.517 [2024-07-12 09:12:43.657268] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:08.517 [2024-07-12 09:12:43.657756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172999 ] 00:46:08.775 [2024-07-12 09:12:43.831140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:09.034 [2024-07-12 09:12:44.046495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:09.034 [2024-07-12 09:12:44.047076] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:46:09.034 [2024-07-12 09:12:44.047299] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:09.034 [2024-07-12 09:12:44.047444] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:09.292 ************************************ 00:46:09.292 END TEST bdev_json_nonenclosed 00:46:09.292 ************************************ 00:46:09.292 00:46:09.292 real 0m0.858s 00:46:09.292 user 0m0.633s 00:46:09.292 sys 0m0.123s 00:46:09.292 09:12:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:46:09.292 09:12:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:09.292 09:12:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:46:09.551 09:12:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:46:09.551 09:12:44 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:46:09.551 09:12:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:09.551 09:12:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:09.551 09:12:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:09.551 09:12:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:09.551 ************************************ 00:46:09.551 START TEST bdev_json_nonarray 00:46:09.551 ************************************ 00:46:09.551 09:12:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:09.551 [2024-07-12 09:12:44.568050] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:09.551 [2024-07-12 09:12:44.568335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173029 ] 00:46:09.809 [2024-07-12 09:12:44.745140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:10.069 [2024-07-12 09:12:45.012047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:10.069 [2024-07-12 09:12:45.012423] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
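[Note] These failures are deliberate: bdev_json_nonenclosed and bdev_json_nonarray feed the config loader malformed JSON and assert on the resulting exit status (es=234 in the trace). A hypothetical reconstruction of the first fixture, inferred only from the error text above — the real nonenclosed.json in the repo may differ:

  # Valid JSON whose top-level value is not an object ...
  printf '[]\n' > /tmp/nonenclosed.json
  # ... makes bdevperf abort in json_config_prepare_ctx with
  # "Invalid JSON configuration: not enclosed in {}." and a non-zero exit.
  "$SPDK/build/examples/bdevperf" --json /tmp/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 '' || echo "exit status: $?"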
00:46:10.069 [2024-07-12 09:12:45.012637] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:10.069 [2024-07-12 09:12:45.012793] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:10.327 ************************************ 00:46:10.327 END TEST bdev_json_nonarray 00:46:10.327 ************************************ 00:46:10.327 00:46:10.327 real 0m0.939s 00:46:10.327 user 0m0.673s 00:46:10.327 sys 0m0.165s 00:46:10.327 09:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:46:10.327 09:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:10.327 09:12:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:46:10.327 09:12:45 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:46:10.327 09:12:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:46:10.327 09:12:45 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:46:10.327 09:12:45 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:46:10.327 09:12:45 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:46:10.328 09:12:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:46:10.328 00:46:10.328 real 0m34.983s 00:46:10.328 user 0m52.524s 00:46:10.328 sys 0m3.676s 00:46:10.328 09:12:45 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:10.328 ************************************ 00:46:10.328 END TEST blockdev_nvme 00:46:10.328 ************************************ 00:46:10.328 09:12:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:46:10.585 09:12:45 -- common/autotest_common.sh@1142 -- # return 0 00:46:10.585 09:12:45 -- spdk/autotest.sh@213 -- # uname -s 00:46:10.585 09:12:45 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:46:10.585 09:12:45 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:46:10.585 09:12:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:46:10.586 09:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:10.586 09:12:45 -- common/autotest_common.sh@10 -- # set +x 00:46:10.586 ************************************ 00:46:10.586 START TEST blockdev_nvme_gpt 00:46:10.586 ************************************ 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:46:10.586 * Looking for test storage... 
00:46:10.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=173117 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 173117 00:46:10.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 173117 ']' 00:46:10.586 09:12:45 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
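The waitforlisten step above blocks until spdk_tgt is accepting RPCs on /var/tmp/spdk.sock. Its internals are not traced in this capture; a minimal sketch of the polling loop it performs, assuming the stock rpc.py client and an rpc_get_methods probe:

  # poll until the SPDK target answers on its RPC socket (assumed probe; hedged)
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
          rpc_get_methods > /dev/null 2>&1; do
      sleep 0.1
  done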
00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:10.586 09:12:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:10.586 [2024-07-12 09:12:45.709663] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:10.586 [2024-07-12 09:12:45.709908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173117 ] 00:46:10.844 [2024-07-12 09:12:45.883832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:11.103 [2024-07-12 09:12:46.119472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:12.039 09:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:12.039 09:12:46 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:46:12.039 09:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:46:12.039 09:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:46:12.039 09:12:46 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:12.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:12.039 Waiting for block devices as requested 00:46:12.298 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:12.298 09:12:47 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:46:12.298 BYT; 00:46:12.298 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:46:12.298 BYT; 00:46:12.298 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:46:12.298 09:12:47 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:46:13.233 09:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:46:13.233 09:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:46:13.233 09:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:46:13.233 09:12:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:46:13.233 09:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:46:13.233 09:12:48 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:46:14.167 The operation has completed successfully. 
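The get_spdk_gpt/get_spdk_gpt_old steps just traced pull the partition-type GUIDs out of module/bdev/gpt/gpt.h: the IFS='()' read captures the macro's argument list, and the two intermediate spdk_guid values in the trace imply a comma-to-dash substitution followed by stripping the 0x prefixes. A sketch of that extraction and the sgdisk typing it feeds (commands and GUIDs as shown above; run only against a scratch disk):

  # capture everything between '(' and ')' on the matching line of gpt.h
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID \
      /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h)
  spdk_guid=${spdk_guid//, /-}   # 0x6527994e, 0x2c5a, ... -> 0x6527994e-0x2c5a-...
  spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
  # stamp partition 1 with the SPDK type GUID and the fixed unique GUID used here
  sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1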
00:46:14.167 09:12:49 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:46:15.101 The operation has completed successfully. 00:46:15.101 09:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:15.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:15.667 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 [] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:46:16.602 
09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:16.602 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:46:16.602 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:46:16.860 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:46:16.860 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:46:16.860 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:46:16.860 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 173117 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 173117 ']' 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 173117 00:46:16.861 09:12:51 blockdev_nvme_gpt -- 
common/autotest_common.sh@953 -- # uname 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173117 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173117' 00:46:16.861 killing process with pid 173117 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 173117 00:46:16.861 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 173117 00:46:19.392 09:12:54 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:19.392 09:12:54 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:46:19.392 09:12:54 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:46:19.392 09:12:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:19.392 09:12:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:19.392 ************************************ 00:46:19.392 START TEST bdev_hello_world 00:46:19.392 ************************************ 00:46:19.392 09:12:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:46:19.392 [2024-07-12 09:12:54.104901] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:19.392 [2024-07-12 09:12:54.105129] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173670 ] 00:46:19.392 [2024-07-12 09:12:54.275172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.392 [2024-07-12 09:12:54.495532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:19.958 [2024-07-12 09:12:54.932668] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:46:19.959 [2024-07-12 09:12:54.932770] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:46:19.959 [2024-07-12 09:12:54.932813] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:46:19.959 [2024-07-12 09:12:54.935827] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:46:19.959 [2024-07-12 09:12:54.936366] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:46:19.959 [2024-07-12 09:12:54.936422] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:46:19.959 [2024-07-12 09:12:54.936670] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
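The NOTICE lines above walk the hello_bdev example end to end: open bdev Nvme0n1p1, acquire an I/O channel, write "Hello World!" to it, and read the string back. The standalone invocation, exactly as run_test launched it here:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1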
00:46:19.959 00:46:19.959 [2024-07-12 09:12:54.936747] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:46:20.892 00:46:20.892 real 0m2.039s 00:46:20.892 user 0m1.667s 00:46:20.892 sys 0m0.272s 00:46:20.892 09:12:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:20.892 09:12:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:46:20.892 ************************************ 00:46:20.892 END TEST bdev_hello_world 00:46:20.892 ************************************ 00:46:21.149 09:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:21.149 09:12:56 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:46:21.149 09:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:46:21.149 09:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:21.149 09:12:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:21.149 ************************************ 00:46:21.149 START TEST bdev_bounds 00:46:21.149 ************************************ 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=173720 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:46:21.149 Process bdevio pid: 173720 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 173720' 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 173720 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 173720 ']' 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:21.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:21.149 09:12:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:21.149 [2024-07-12 09:12:56.184571] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:46:21.149 [2024-07-12 09:12:56.184762] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173720 ] 00:46:21.407 [2024-07-12 09:12:56.354554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:21.407 [2024-07-12 09:12:56.573621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:21.407 [2024-07-12 09:12:56.573698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:46:21.407 [2024-07-12 09:12:56.573703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:21.972 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:21.972 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:46:21.972 09:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:46:22.230 I/O targets: 00:46:22.230 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:46:22.230 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:46:22.230 00:46:22.230 00:46:22.230 CUnit - A unit testing framework for C - Version 2.1-3 00:46:22.230 http://cunit.sourceforge.net/ 00:46:22.230 00:46:22.230 00:46:22.230 Suite: bdevio tests on: Nvme0n1p2 00:46:22.230 Test: blockdev write read block ...passed 00:46:22.230 Test: blockdev write zeroes read block ...passed 00:46:22.230 Test: blockdev write zeroes read no split ...passed 00:46:22.230 Test: blockdev write zeroes read split ...passed 00:46:22.230 Test: blockdev write zeroes read split partial ...passed 00:46:22.230 Test: blockdev reset ...[2024-07-12 09:12:57.289571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:46:22.230 [2024-07-12 09:12:57.293384] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:46:22.230 passed 00:46:22.230 Test: blockdev write read 8 blocks ...passed 00:46:22.230 Test: blockdev write read size > 128k ...passed 00:46:22.230 Test: blockdev write read invalid size ...passed 00:46:22.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:22.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:22.230 Test: blockdev write read max offset ...passed 00:46:22.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:22.230 Test: blockdev writev readv 8 blocks ...passed 00:46:22.230 Test: blockdev writev readv 30 x 1block ...passed 00:46:22.230 Test: blockdev writev readv block ...passed 00:46:22.230 Test: blockdev writev readv size > 128k ...passed 00:46:22.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:22.230 Test: blockdev comparev and writev ...[2024-07-12 09:12:57.300760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x92c0d000 len:0x1000 00:46:22.230 [2024-07-12 09:12:57.300852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:46:22.230 passed 00:46:22.230 Test: blockdev nvme passthru rw ...passed 00:46:22.230 Test: blockdev nvme passthru vendor specific ...passed 00:46:22.230 Test: blockdev nvme admin passthru ...passed 00:46:22.230 Test: blockdev copy ...passed 00:46:22.230 Suite: bdevio tests on: Nvme0n1p1 00:46:22.230 Test: blockdev write read block ...passed 00:46:22.230 Test: blockdev write zeroes read block ...passed 00:46:22.230 Test: blockdev write zeroes read no split ...passed 00:46:22.230 Test: blockdev write zeroes read split ...passed 00:46:22.230 Test: blockdev write zeroes read split partial ...passed 00:46:22.230 Test: blockdev reset ...[2024-07-12 09:12:57.353958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:46:22.230 [2024-07-12 09:12:57.357484] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:46:22.230 passed 00:46:22.230 Test: blockdev write read 8 blocks ...passed 00:46:22.230 Test: blockdev write read size > 128k ...passed 00:46:22.230 Test: blockdev write read invalid size ...passed 00:46:22.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:22.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:22.230 Test: blockdev write read max offset ...passed 00:46:22.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:22.230 Test: blockdev writev readv 8 blocks ...passed 00:46:22.230 Test: blockdev writev readv 30 x 1block ...passed 00:46:22.230 Test: blockdev writev readv block ...passed 00:46:22.230 Test: blockdev writev readv size > 128k ...passed 00:46:22.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:22.230 Test: blockdev comparev and writev ...[2024-07-12 09:12:57.364939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x92c09000 len:0x1000 00:46:22.230 [2024-07-12 09:12:57.365049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:46:22.230 passed 00:46:22.230 Test: blockdev nvme passthru rw ...passed 00:46:22.231 Test: blockdev nvme passthru vendor specific ...passed 00:46:22.231 Test: blockdev nvme admin passthru ...passed 00:46:22.231 Test: blockdev copy ...passed 00:46:22.231 00:46:22.231 Run Summary: Type Total Ran Passed Failed Inactive 00:46:22.231 suites 2 2 n/a 0 0 00:46:22.231 tests 46 46 46 0 0 00:46:22.231 asserts 284 284 284 0 n/a 00:46:22.231 00:46:22.231 Elapsed time = 0.361 seconds 00:46:22.231 0 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 173720 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 173720 ']' 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 173720 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173720 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:22.231 killing process with pid 173720 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173720' 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 173720 00:46:22.231 09:12:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 173720 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:46:23.606 00:46:23.606 real 0m2.287s 00:46:23.606 user 0m5.311s 00:46:23.606 sys 0m0.350s 00:46:23.606 ************************************ 00:46:23.606 END TEST bdev_bounds 00:46:23.606 ************************************ 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:23.606 09:12:58 blockdev_nvme_gpt -- 
common/autotest_common.sh@1142 -- # return 0 00:46:23.606 09:12:58 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:46:23.606 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:46:23.606 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:23.606 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:23.606 ************************************ 00:46:23.606 START TEST bdev_nbd 00:46:23.606 ************************************ 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:46:23.606 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=173777 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 173777 /var/tmp/spdk-nbd.sock 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 173777 ']' 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:46:23.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
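bdev_nbd exports each GPT bdev as a kernel block device over NBD, so ordinary tools (dd, cmp) can exercise the SPDK stack. Reduced to a single device, the start/verify/stop cycle the trace below performs is (RPCs, dd, and the /proc/partitions check copied from this trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1p1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions          # waitfornbd: the device is live
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_stop_disk /dev/nbd0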
00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:23.607 09:12:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:23.607 [2024-07-12 09:12:58.526131] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:23.607 [2024-07-12 09:12:58.526335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:23.607 [2024-07-12 09:12:58.688125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.865 [2024-07-12 09:12:58.906492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:46:24.432 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:24.690 1+0 records in 00:46:24.690 1+0 records out 00:46:24.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820149 s, 5.0 MB/s 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:46:24.690 09:12:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:24.948 1+0 records in 00:46:24.948 1+0 records out 00:46:24.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434013 s, 9.4 MB/s 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:46:24.948 09:13:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:46:25.206 { 00:46:25.206 "nbd_device": "/dev/nbd0", 00:46:25.206 "bdev_name": "Nvme0n1p1" 00:46:25.206 }, 00:46:25.206 { 00:46:25.206 "nbd_device": "/dev/nbd1", 00:46:25.206 "bdev_name": "Nvme0n1p2" 00:46:25.206 } 00:46:25.206 ]' 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:46:25.206 { 00:46:25.206 "nbd_device": "/dev/nbd0", 00:46:25.206 "bdev_name": "Nvme0n1p1" 00:46:25.206 }, 00:46:25.206 { 00:46:25.206 "nbd_device": "/dev/nbd1", 00:46:25.206 "bdev_name": "Nvme0n1p2" 00:46:25.206 } 00:46:25.206 ]' 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:25.206 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:25.772 09:13:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:26.044 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:26.045 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:26.045 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:46:26.389 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:46:26.390 /dev/nbd0 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:26.390 1+0 records in 00:46:26.390 1+0 records out 00:46:26.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641073 s, 6.4 MB/s 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:26.390 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:46:26.650 /dev/nbd1 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:26.651 1+0 records in 00:46:26.651 1+0 records out 00:46:26.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539778 s, 7.6 MB/s 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:46:26.651 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:26.910 09:13:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:46:27.167 { 00:46:27.167 "nbd_device": "/dev/nbd0", 00:46:27.167 "bdev_name": "Nvme0n1p1" 00:46:27.167 }, 00:46:27.167 { 00:46:27.167 "nbd_device": "/dev/nbd1", 00:46:27.167 "bdev_name": "Nvme0n1p2" 00:46:27.167 } 00:46:27.167 ]' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:46:27.167 { 00:46:27.167 "nbd_device": "/dev/nbd0", 00:46:27.167 "bdev_name": "Nvme0n1p1" 00:46:27.167 }, 00:46:27.167 { 00:46:27.167 "nbd_device": "/dev/nbd1", 00:46:27.167 "bdev_name": "Nvme0n1p2" 00:46:27.167 } 00:46:27.167 ]' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:46:27.167 /dev/nbd1' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:46:27.167 /dev/nbd1' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:46:27.167 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:46:27.168 256+0 records in 00:46:27.168 256+0 records out 00:46:27.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00791723 s, 132 MB/s 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:46:27.168 256+0 records in 00:46:27.168 256+0 records out 00:46:27.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0835342 s, 12.6 MB/s 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:46:27.168 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:46:27.426 256+0 records in 00:46:27.426 256+0 records out 00:46:27.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0861607 s, 12.2 MB/s 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:27.426 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:27.685 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:27.943 09:13:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:46:28.201 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:46:28.458 malloc_lvol_verify 
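The nbd_common helpers traced above drive a full attach/write/compare/detach cycle against the two GPT partitions: each bdev is exported as /dev/nbdX through the spdk-nbd RPC socket, 1 MiB of random data is written through the NBD device with O_DIRECT, read back with cmp against the source file, and the device is detached again. A condensed sketch of that cycle, assuming the same SPDK tree and /var/tmp/spdk-nbd.sock paths that appear in the trace:

    #!/usr/bin/env bash
    # Sketch of the NBD write/verify cycle exercised by nbd_common.sh (paths as in the trace above).
    set -euo pipefail
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    TMP=/tmp/nbdrandtest

    # Export both GPT partition bdevs as NBD block devices.
    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1p1 /dev/nbd0
    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1p2 /dev/nbd1

    # Write 1 MiB of random data through each device, then compare byte-for-byte.
    dd if=/dev/urandom of="$TMP" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$TMP" "$nbd"
    done
    rm -f "$TMP"

    # Detach and confirm no NBD devices remain registered with the target.
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
    "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd1
    count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

With cmp exiting non-zero on any mismatch and set -e in effect, a data error aborts the script exactly as a failed verification aborts the test above.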
00:46:28.458 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:46:29.024 ef0aec59-3310-4d4c-bf3e-6e118a28d8ca 00:46:29.024 09:13:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:46:29.024 8877f173-aa42-43f0-9dfc-03f3d4bd0918 00:46:29.024 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:46:29.281 /dev/nbd0 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:46:29.281 mke2fs 1.45.5 (07-Jan-2020) 00:46:29.281 00:46:29.281 Filesystem too small for a journal 00:46:29.281 Creating filesystem with 1024 4k blocks and 1024 inodes 00:46:29.281 00:46:29.281 Allocating group tables: 0/1 done 00:46:29.281 Writing inode tables: 0/1 done 00:46:29.281 Writing superblocks and filesystem accounting information: 0/1 done 00:46:29.281 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:29.281 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 173777 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 173777 ']' 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 173777 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173777 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:29.540 killing process with pid 173777 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173777' 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 173777 00:46:29.540 09:13:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 173777 00:46:30.959 09:13:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:46:30.959 00:46:30.959 real 0m7.433s 00:46:30.959 user 0m10.942s 00:46:30.959 sys 0m1.705s 00:46:30.959 09:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:30.959 09:13:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:30.959 ************************************ 00:46:30.959 END TEST bdev_nbd 00:46:30.959 ************************************ 00:46:30.959 09:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:30.959 skipping fio tests on NVMe due to multi-ns failures. 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:30.959 09:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:30.959 09:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:46:30.959 09:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:30.959 09:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:30.959 ************************************ 00:46:30.959 START TEST bdev_verify 00:46:30.959 ************************************ 00:46:30.959 09:13:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:30.959 [2024-07-12 09:13:06.004593] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:30.959 [2024-07-12 09:13:06.005412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174056 ] 00:46:31.217 [2024-07-12 09:13:06.171844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:31.217 [2024-07-12 09:13:06.402227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:31.217 [2024-07-12 09:13:06.402224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:31.782 Running I/O for 5 seconds... 
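The bdev_verify stage just launched runs the bdevperf example against both GPT partitions for five seconds at queue depth 128 with 4 KiB I/O, spread over two reactors (core mask 0x3, hence the two "Reactor started" lines above). The same invocation, reformatted for readability; the -q/-o/-w/-t/-m flags carry the meanings visible in the per-job headers of the result table that follows, and -C is forwarded unchanged from the test script:

    # Queue depth 128 (-q), 4096-byte I/O (-o), verify workload (-w), 5 s run time (-t),
    # reactors on cores 0 and 1 (-m 0x3); -C is passed through as-is by blockdev.sh.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3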
00:46:37.043 00:46:37.043 Latency(us) 00:46:37.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:37.043 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:37.043 Verification LBA range: start 0x0 length 0x4ff80 00:46:37.043 Nvme0n1p1 : 5.01 4726.59 18.46 0.00 0.00 26983.77 4587.52 31218.97 00:46:37.043 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:37.043 Verification LBA range: start 0x4ff80 length 0x4ff80 00:46:37.043 Nvme0n1p1 : 5.03 4583.83 17.91 0.00 0.00 27771.75 1526.69 61484.68 00:46:37.043 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:37.043 Verification LBA range: start 0x0 length 0x4ff7f 00:46:37.043 Nvme0n1p2 : 5.03 4736.58 18.50 0.00 0.00 26875.52 2993.80 29669.93 00:46:37.043 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:37.043 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:46:37.043 Nvme0n1p2 : 5.01 4568.74 17.85 0.00 0.00 27918.53 5153.51 60531.43 00:46:37.043 =================================================================================================================== 00:46:37.043 Total : 18615.74 72.72 0.00 0.00 27379.69 1526.69 61484.68 00:46:38.425 00:46:38.425 real 0m7.264s 00:46:38.425 user 0m13.276s 00:46:38.425 sys 0m0.281s 00:46:38.425 ************************************ 00:46:38.425 END TEST bdev_verify 00:46:38.425 ************************************ 00:46:38.425 09:13:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:38.425 09:13:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:46:38.425 09:13:13 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:38.425 09:13:13 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:38.425 09:13:13 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:46:38.425 09:13:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:38.425 09:13:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:38.425 ************************************ 00:46:38.425 START TEST bdev_verify_big_io 00:46:38.425 ************************************ 00:46:38.425 09:13:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:38.425 [2024-07-12 09:13:13.322041] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:38.425 [2024-07-12 09:13:13.322923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174152 ] 00:46:38.425 [2024-07-12 09:13:13.497784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:38.682 [2024-07-12 09:13:13.715295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:38.682 [2024-07-12 09:13:13.715294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:39.269 Running I/O for 5 seconds... 
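In these bdevperf tables the Total row is simply the sum of the four per-job rows (one job per partition per reactor core). A quick awk check against the verify run above reproduces its summary figures:

    # Sum the per-job IOPS and MiB/s figures from the verify table above.
    awk 'BEGIN {
        iops = 4726.59 + 4583.83 + 4736.58 + 4568.74;   # per-job IOPS
        mibs = 18.46 + 17.91 + 18.50 + 17.85;           # per-job MiB/s
        printf "Total: %.2f IOPS, %.2f MiB/s\n", iops, mibs   # -> 18615.74 IOPS, 72.72 MiB/s
    }'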
00:46:44.574 00:46:44.574 Latency(us) 00:46:44.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:44.574 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:46:44.574 Verification LBA range: start 0x0 length 0x4ff8 00:46:44.574 Nvme0n1p1 : 5.26 438.34 27.40 0.00 0.00 287531.36 5064.15 343170.33 00:46:44.574 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:46:44.574 Verification LBA range: start 0x4ff8 length 0x4ff8 00:46:44.574 Nvme0n1p1 : 5.24 414.91 25.93 0.00 0.00 302712.05 6434.44 297414.28 00:46:44.574 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:46:44.574 Verification LBA range: start 0x0 length 0x4ff7 00:46:44.574 Nvme0n1p2 : 5.26 437.38 27.34 0.00 0.00 281492.11 2904.44 346983.33 00:46:44.574 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:46:44.574 Verification LBA range: start 0x4ff7 length 0x4ff7 00:46:44.574 Nvme0n1p2 : 5.25 414.62 25.91 0.00 0.00 294380.57 3634.27 320292.31 00:46:44.574 =================================================================================================================== 00:46:44.574 Total : 1705.25 106.58 0.00 0.00 291332.92 2904.44 346983.33 00:46:45.946 00:46:45.946 real 0m7.827s 00:46:45.946 user 0m14.339s 00:46:45.946 sys 0m0.345s 00:46:45.947 09:13:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:45.947 09:13:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:46:45.947 ************************************ 00:46:45.947 END TEST bdev_verify_big_io 00:46:45.947 ************************************ 00:46:45.947 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:45.947 09:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:45.947 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:45.947 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:45.947 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:45.947 ************************************ 00:46:45.947 START TEST bdev_write_zeroes 00:46:45.947 ************************************ 00:46:45.947 09:13:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:46.204 [2024-07-12 09:13:21.195251] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:46.204 [2024-07-12 09:13:21.195466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174265 ] 00:46:46.204 [2024-07-12 09:13:21.357147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:46.462 [2024-07-12 09:13:21.573956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:47.027 Running I/O for 1 seconds... 
00:46:47.964 00:46:47.964 Latency(us) 00:46:47.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:47.964 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:47.964 Nvme0n1p1 : 1.01 25217.53 98.51 0.00 0.00 5062.69 2859.75 13285.93 00:46:47.964 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:47.964 Nvme0n1p2 : 1.01 25235.36 98.58 0.00 0.00 5055.13 2234.18 12332.68 00:46:47.964 =================================================================================================================== 00:46:47.964 Total : 50452.90 197.08 0.00 0.00 5058.91 2234.18 13285.93 00:46:48.897 00:46:48.897 real 0m2.917s 00:46:48.897 user 0m2.580s 00:46:48.897 sys 0m0.237s 00:46:48.897 09:13:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:48.897 09:13:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:46:48.897 ************************************ 00:46:48.897 END TEST bdev_write_zeroes 00:46:48.897 ************************************ 00:46:49.156 09:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:49.156 09:13:24 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:49.156 09:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:49.156 09:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:49.156 09:13:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:49.156 ************************************ 00:46:49.156 START TEST bdev_json_nonenclosed 00:46:49.156 ************************************ 00:46:49.156 09:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:49.156 [2024-07-12 09:13:24.174945] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:49.156 [2024-07-12 09:13:24.175437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174322 ] 00:46:49.156 [2024-07-12 09:13:24.342460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:49.413 [2024-07-12 09:13:24.565334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:49.413 [2024-07-12 09:13:24.565445] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:46:49.413 [2024-07-12 09:13:24.565501] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:49.413 [2024-07-12 09:13:24.565529] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:49.978 00:46:49.978 real 0m0.861s 00:46:49.978 user 0m0.611s 00:46:49.978 sys 0m0.149s 00:46:49.978 09:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:46:49.978 09:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:49.978 09:13:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:46:49.978 ************************************ 00:46:49.978 END TEST bdev_json_nonenclosed 00:46:49.978 ************************************ 00:46:49.978 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:46:49.978 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:46:49.978 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:49.978 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:46:49.978 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:49.978 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:49.978 ************************************ 00:46:49.978 START TEST bdev_json_nonarray 00:46:49.978 ************************************ 00:46:49.978 09:13:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:49.978 [2024-07-12 09:13:25.075542] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:46:49.978 [2024-07-12 09:13:25.075740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174378 ] 00:46:50.235 [2024-07-12 09:13:25.246101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:50.492 [2024-07-12 09:13:25.485004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:50.492 [2024-07-12 09:13:25.485156] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
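Both JSON negative tests feed bdevperf a deliberately malformed --json file and expect it to bail out with the errors shown above; the harness records the resulting exit status 234 as the expected outcome. The actual contents of nonenclosed.json and nonarray.json are not reproduced in this log, so the inputs below are illustrative only, shaped to match the two error messages and the standard top-level layout a valid config uses:

    # Illustrative fixtures only; the real files live under test/bdev/ in the SPDK tree.
    cat > valid.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF

    cat > nonenclosed.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # -> json_config_prepare_ctx: Invalid JSON configuration: not enclosed in {}.

    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF
    # -> json_config_prepare_ctx: Invalid JSON configuration: 'subsystems' should be an array.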
00:46:50.492 [2024-07-12 09:13:25.485215] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:50.492 [2024-07-12 09:13:25.485244] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:50.749 00:46:50.749 real 0m0.871s 00:46:50.749 user 0m0.626s 00:46:50.749 sys 0m0.144s 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:46:50.749 ************************************ 00:46:50.749 END TEST bdev_json_nonarray 00:46:50.749 ************************************ 00:46:50.749 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:46:50.749 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:46:50.749 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:46:50.749 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:46:50.749 09:13:25 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:46:50.749 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:50.749 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:50.749 09:13:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:50.749 ************************************ 00:46:50.749 START TEST bdev_gpt_uuid 00:46:50.749 ************************************ 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=174410 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 174410 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 174410 ']' 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:50.749 09:13:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:51.006 [2024-07-12 09:13:26.006994] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
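bdev_gpt_uuid launches a full spdk_tgt and blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock before issuing any RPCs. A minimal poll with the same effect is sketched below; rpc_get_methods is used here as a cheap readiness probe, and this loop is an illustration rather than the helper's actual implementation:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!

    # Poll the RPC socket until spdk_tgt is ready to accept commands.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done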
00:46:51.007 [2024-07-12 09:13:26.007419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174410 ] 00:46:51.007 [2024-07-12 09:13:26.170763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.263 [2024-07-12 09:13:26.384582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:52.197 Some configs were skipped because the RPC state that can call them passed over. 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:46:52.197 { 00:46:52.197 "name": "Nvme0n1p1", 00:46:52.197 "aliases": [ 00:46:52.197 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:46:52.197 ], 00:46:52.197 "product_name": "GPT Disk", 00:46:52.197 "block_size": 4096, 00:46:52.197 "num_blocks": 655104, 00:46:52.197 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:46:52.197 "assigned_rate_limits": { 00:46:52.197 "rw_ios_per_sec": 0, 00:46:52.197 "rw_mbytes_per_sec": 0, 00:46:52.197 "r_mbytes_per_sec": 0, 00:46:52.197 "w_mbytes_per_sec": 0 00:46:52.197 }, 00:46:52.197 "claimed": false, 00:46:52.197 "zoned": false, 00:46:52.197 "supported_io_types": { 00:46:52.197 "read": true, 00:46:52.197 "write": true, 00:46:52.197 "unmap": true, 00:46:52.197 "flush": true, 00:46:52.197 "reset": true, 00:46:52.197 "nvme_admin": false, 00:46:52.197 "nvme_io": false, 00:46:52.197 "nvme_io_md": false, 00:46:52.197 "write_zeroes": true, 00:46:52.197 "zcopy": false, 00:46:52.197 "get_zone_info": false, 00:46:52.197 "zone_management": false, 00:46:52.197 "zone_append": false, 00:46:52.197 "compare": true, 00:46:52.197 "compare_and_write": false, 00:46:52.197 "abort": true, 00:46:52.197 "seek_hole": false, 00:46:52.197 "seek_data": false, 00:46:52.197 "copy": true, 00:46:52.197 "nvme_iov_md": false 00:46:52.197 }, 00:46:52.197 "driver_specific": { 
00:46:52.197 "gpt": { 00:46:52.197 "base_bdev": "Nvme0n1", 00:46:52.197 "offset_blocks": 256, 00:46:52.197 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:46:52.197 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:46:52.197 "partition_name": "SPDK_TEST_first" 00:46:52.197 } 00:46:52.197 } 00:46:52.197 } 00:46:52.197 ]' 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:46:52.197 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:46:52.454 { 00:46:52.454 "name": "Nvme0n1p2", 00:46:52.454 "aliases": [ 00:46:52.454 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:46:52.454 ], 00:46:52.454 "product_name": "GPT Disk", 00:46:52.454 "block_size": 4096, 00:46:52.454 "num_blocks": 655103, 00:46:52.454 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:46:52.454 "assigned_rate_limits": { 00:46:52.454 "rw_ios_per_sec": 0, 00:46:52.454 "rw_mbytes_per_sec": 0, 00:46:52.454 "r_mbytes_per_sec": 0, 00:46:52.454 "w_mbytes_per_sec": 0 00:46:52.454 }, 00:46:52.454 "claimed": false, 00:46:52.454 "zoned": false, 00:46:52.454 "supported_io_types": { 00:46:52.454 "read": true, 00:46:52.454 "write": true, 00:46:52.454 "unmap": true, 00:46:52.454 "flush": true, 00:46:52.454 "reset": true, 00:46:52.454 "nvme_admin": false, 00:46:52.454 "nvme_io": false, 00:46:52.454 "nvme_io_md": false, 00:46:52.454 "write_zeroes": true, 00:46:52.454 "zcopy": false, 00:46:52.454 "get_zone_info": false, 00:46:52.454 "zone_management": false, 00:46:52.454 "zone_append": false, 00:46:52.454 "compare": true, 00:46:52.454 "compare_and_write": false, 00:46:52.454 "abort": true, 00:46:52.454 "seek_hole": false, 00:46:52.454 "seek_data": false, 00:46:52.454 "copy": true, 00:46:52.454 "nvme_iov_md": false 00:46:52.454 }, 00:46:52.454 "driver_specific": { 00:46:52.454 "gpt": { 00:46:52.454 "base_bdev": "Nvme0n1", 00:46:52.454 "offset_blocks": 655360, 00:46:52.454 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:46:52.454 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:46:52.454 "partition_name": "SPDK_TEST_second" 00:46:52.454 } 00:46:52.454 } 00:46:52.454 } 00:46:52.454 ]' 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:46:52.454 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 174410 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 174410 ']' 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 174410 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174410 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:52.712 killing process with pid 174410 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174410' 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 174410 00:46:52.712 09:13:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 174410 00:46:55.236 00:46:55.236 real 0m3.908s 00:46:55.236 user 0m4.251s 00:46:55.236 sys 0m0.470s 00:46:55.236 09:13:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:55.236 09:13:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:46:55.236 ************************************ 00:46:55.236 END TEST bdev_gpt_uuid 00:46:55.236 ************************************ 00:46:55.236 09:13:29 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:46:55.236 09:13:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:55.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:55.236 Waiting for block devices as requested 00:46:55.236 
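The gpt_uuid checks traced above fetch each partition bdev by its GPT unique partition GUID and confirm that the bdev's alias and its driver-specific GPT metadata echo the same GUID back. A condensed version of the check for the first partition, using the RPC and jq filters visible in the trace (the GUID is the one reported by this particular run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    GUID=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first partition GUID from this run

    bdev=$("$RPC" bdev_get_bdevs -b "$GUID")
    [ "$(echo "$bdev" | jq -r length)" -eq 1 ]
    [ "$(echo "$bdev" | jq -r '.[0].aliases[0]')" = "$GUID" ]
    [ "$(echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid')" = "$GUID" ]
    echo "Nvme0n1p1 GPT unique partition GUID verified"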
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:55.236 09:13:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:46:55.236 09:13:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:46:55.236 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:46:55.236 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:46:55.236 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:46:55.236 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:46:55.236 09:13:30 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:46:55.236 00:46:55.236 real 0m44.815s 00:46:55.236 user 1m3.167s 00:46:55.236 sys 0m6.172s 00:46:55.236 09:13:30 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:55.236 ************************************ 00:46:55.236 END TEST blockdev_nvme_gpt 00:46:55.236 ************************************ 00:46:55.236 09:13:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:46:55.237 09:13:30 -- common/autotest_common.sh@1142 -- # return 0 00:46:55.237 09:13:30 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:46:55.237 09:13:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:55.237 09:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:55.237 09:13:30 -- common/autotest_common.sh@10 -- # set +x 00:46:55.237 ************************************ 00:46:55.237 START TEST nvme 00:46:55.237 ************************************ 00:46:55.237 09:13:30 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:46:55.494 * Looking for test storage... 00:46:55.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:46:55.494 09:13:30 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:55.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:55.753 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:57.124 09:13:31 nvme -- nvme/nvme.sh@79 -- # uname 00:46:57.124 09:13:31 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:46:57.124 09:13:31 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:46:57.124 09:13:31 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1069 -- # stubpid=174855 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:46:57.124 Waiting for stub to ready for secondary processes... 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/174855 ]] 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:46:57.124 09:13:31 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:46:57.124 [2024-07-12 09:13:31.996616] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
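Between the suites, cleanup hands the controller back to the kernel driver, wipefs clears the GPT signatures left by the previous tests, and setup.sh rebinds the device for userspace use. The nvme suite then keeps a long-lived stub process (started with -s 4096 -i 0 -m 0xE, running as the DPDK primary per its EAL parameters) and simply waits for /var/run/spdk_stub0 to appear before the individual tests run. A sketch of that readiness wait, mirroring the one-second poll and the /proc PID check visible in the trace:

    # Loop once per second until the stub signals readiness, giving up if its PID disappears.
    stubpid=174855   # PID reported by this run
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || { echo "stub died before becoming ready" >&2; exit 1; }
        sleep 1s
    done
    echo done.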
00:46:57.124 [2024-07-12 09:13:31.996964] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:46:58.056 09:13:32 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:46:58.056 09:13:32 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/174855 ]] 00:46:58.056 09:13:32 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:46:58.056 [2024-07-12 09:13:33.243313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:58.312 [2024-07-12 09:13:33.457894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:46:58.312 [2024-07-12 09:13:33.457967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:46:58.312 [2024-07-12 09:13:33.457973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:58.312 [2024-07-12 09:13:33.467540] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:46:58.312 [2024-07-12 09:13:33.467627] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:46:58.312 [2024-07-12 09:13:33.474650] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:46:58.312 [2024-07-12 09:13:33.474816] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:46:58.876 09:13:33 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:46:58.876 09:13:33 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:46:58.876 done. 00:46:58.876 09:13:33 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:46:58.876 09:13:33 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:46:58.876 09:13:33 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:58.876 09:13:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:46:58.876 ************************************ 00:46:58.876 START TEST nvme_reset 00:46:58.876 ************************************ 00:46:58.876 09:13:33 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:46:59.133 Initializing NVMe Controllers 00:46:59.133 Skipping QEMU NVMe SSD at 0000:00:10.0 00:46:59.133 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:46:59.133 00:46:59.133 real 0m0.308s 00:46:59.133 user 0m0.077s 00:46:59.133 sys 0m0.160s 00:46:59.133 09:13:34 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:59.133 ************************************ 00:46:59.133 END TEST nvme_reset 00:46:59.133 ************************************ 00:46:59.133 09:13:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:46:59.133 09:13:34 nvme -- common/autotest_common.sh@1142 -- # return 0 00:46:59.133 09:13:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:46:59.133 09:13:34 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:59.133 09:13:34 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:59.133 09:13:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:46:59.133 ************************************ 00:46:59.133 START TEST nvme_identify 00:46:59.133 ************************************ 00:46:59.133 
09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:46:59.133 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:46:59.133 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:46:59.133 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:46:59.133 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:46:59.133 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:46:59.133 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:46:59.133 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:59.390 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:59.390 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:46:59.390 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:46:59.390 09:13:34 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:46:59.390 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:46:59.648 [2024-07-12 09:13:34.651889] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 174890 terminated unexpected 00:46:59.648 ===================================================== 00:46:59.648 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:46:59.648 ===================================================== 00:46:59.648 Controller Capabilities/Features 00:46:59.648 ================================ 00:46:59.648 Vendor ID: 1b36 00:46:59.648 Subsystem Vendor ID: 1af4 00:46:59.648 Serial Number: 12340 00:46:59.648 Model Number: QEMU NVMe Ctrl 00:46:59.648 Firmware Version: 8.0.0 00:46:59.648 Recommended Arb Burst: 6 00:46:59.648 IEEE OUI Identifier: 00 54 52 00:46:59.648 Multi-path I/O 00:46:59.648 May have multiple subsystem ports: No 00:46:59.648 May have multiple controllers: No 00:46:59.648 Associated with SR-IOV VF: No 00:46:59.648 Max Data Transfer Size: 524288 00:46:59.648 Max Number of Namespaces: 256 00:46:59.648 Max Number of I/O Queues: 64 00:46:59.648 NVMe Specification Version (VS): 1.4 00:46:59.648 NVMe Specification Version (Identify): 1.4 00:46:59.648 Maximum Queue Entries: 2048 00:46:59.648 Contiguous Queues Required: Yes 00:46:59.648 Arbitration Mechanisms Supported 00:46:59.648 Weighted Round Robin: Not Supported 00:46:59.648 Vendor Specific: Not Supported 00:46:59.648 Reset Timeout: 7500 ms 00:46:59.648 Doorbell Stride: 4 bytes 00:46:59.648 NVM Subsystem Reset: Not Supported 00:46:59.648 Command Sets Supported 00:46:59.648 NVM Command Set: Supported 00:46:59.648 Boot Partition: Not Supported 00:46:59.648 Memory Page Size Minimum: 4096 bytes 00:46:59.648 Memory Page Size Maximum: 65536 bytes 00:46:59.648 Persistent Memory Region: Not Supported 00:46:59.648 Optional Asynchronous Events Supported 00:46:59.648 Namespace Attribute Notices: Supported 00:46:59.648 Firmware Activation Notices: Not Supported 00:46:59.648 ANA Change Notices: Not Supported 00:46:59.648 PLE Aggregate Log Change Notices: Not Supported 00:46:59.648 LBA Status Info Alert Notices: Not Supported 00:46:59.648 EGE Aggregate Log Change Notices: Not Supported 00:46:59.648 Normal NVM Subsystem Shutdown event: Not Supported 00:46:59.648 Zone Descriptor Change Notices: Not Supported 00:46:59.648 
Discovery Log Change Notices: Not Supported 00:46:59.648 Controller Attributes 00:46:59.648 128-bit Host Identifier: Not Supported 00:46:59.648 Non-Operational Permissive Mode: Not Supported 00:46:59.648 NVM Sets: Not Supported 00:46:59.648 Read Recovery Levels: Not Supported 00:46:59.648 Endurance Groups: Not Supported 00:46:59.648 Predictable Latency Mode: Not Supported 00:46:59.648 Traffic Based Keep ALive: Not Supported 00:46:59.648 Namespace Granularity: Not Supported 00:46:59.648 SQ Associations: Not Supported 00:46:59.648 UUID List: Not Supported 00:46:59.648 Multi-Domain Subsystem: Not Supported 00:46:59.648 Fixed Capacity Management: Not Supported 00:46:59.648 Variable Capacity Management: Not Supported 00:46:59.648 Delete Endurance Group: Not Supported 00:46:59.648 Delete NVM Set: Not Supported 00:46:59.648 Extended LBA Formats Supported: Supported 00:46:59.648 Flexible Data Placement Supported: Not Supported 00:46:59.648 00:46:59.648 Controller Memory Buffer Support 00:46:59.648 ================================ 00:46:59.648 Supported: No 00:46:59.648 00:46:59.648 Persistent Memory Region Support 00:46:59.648 ================================ 00:46:59.648 Supported: No 00:46:59.648 00:46:59.648 Admin Command Set Attributes 00:46:59.648 ============================ 00:46:59.648 Security Send/Receive: Not Supported 00:46:59.648 Format NVM: Supported 00:46:59.648 Firmware Activate/Download: Not Supported 00:46:59.648 Namespace Management: Supported 00:46:59.648 Device Self-Test: Not Supported 00:46:59.648 Directives: Supported 00:46:59.648 NVMe-MI: Not Supported 00:46:59.648 Virtualization Management: Not Supported 00:46:59.648 Doorbell Buffer Config: Supported 00:46:59.648 Get LBA Status Capability: Not Supported 00:46:59.648 Command & Feature Lockdown Capability: Not Supported 00:46:59.648 Abort Command Limit: 4 00:46:59.648 Async Event Request Limit: 4 00:46:59.648 Number of Firmware Slots: N/A 00:46:59.648 Firmware Slot 1 Read-Only: N/A 00:46:59.648 Firmware Activation Without Reset: N/A 00:46:59.648 Multiple Update Detection Support: N/A 00:46:59.648 Firmware Update Granularity: No Information Provided 00:46:59.648 Per-Namespace SMART Log: Yes 00:46:59.648 Asymmetric Namespace Access Log Page: Not Supported 00:46:59.648 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:46:59.648 Command Effects Log Page: Supported 00:46:59.648 Get Log Page Extended Data: Supported 00:46:59.648 Telemetry Log Pages: Not Supported 00:46:59.648 Persistent Event Log Pages: Not Supported 00:46:59.648 Supported Log Pages Log Page: May Support 00:46:59.648 Commands Supported & Effects Log Page: Not Supported 00:46:59.648 Feature Identifiers & Effects Log Page:May Support 00:46:59.648 NVMe-MI Commands & Effects Log Page: May Support 00:46:59.648 Data Area 4 for Telemetry Log: Not Supported 00:46:59.648 Error Log Page Entries Supported: 1 00:46:59.648 Keep Alive: Not Supported 00:46:59.648 00:46:59.648 NVM Command Set Attributes 00:46:59.648 ========================== 00:46:59.648 Submission Queue Entry Size 00:46:59.648 Max: 64 00:46:59.648 Min: 64 00:46:59.648 Completion Queue Entry Size 00:46:59.648 Max: 16 00:46:59.648 Min: 16 00:46:59.648 Number of Namespaces: 256 00:46:59.648 Compare Command: Supported 00:46:59.648 Write Uncorrectable Command: Not Supported 00:46:59.648 Dataset Management Command: Supported 00:46:59.648 Write Zeroes Command: Supported 00:46:59.648 Set Features Save Field: Supported 00:46:59.648 Reservations: Not Supported 00:46:59.648 Timestamp: Supported 00:46:59.648 Copy: Supported 
00:46:59.648 Volatile Write Cache: Present 00:46:59.648 Atomic Write Unit (Normal): 1 00:46:59.648 Atomic Write Unit (PFail): 1 00:46:59.648 Atomic Compare & Write Unit: 1 00:46:59.648 Fused Compare & Write: Not Supported 00:46:59.648 Scatter-Gather List 00:46:59.648 SGL Command Set: Supported 00:46:59.648 SGL Keyed: Not Supported 00:46:59.648 SGL Bit Bucket Descriptor: Not Supported 00:46:59.648 SGL Metadata Pointer: Not Supported 00:46:59.648 Oversized SGL: Not Supported 00:46:59.648 SGL Metadata Address: Not Supported 00:46:59.648 SGL Offset: Not Supported 00:46:59.648 Transport SGL Data Block: Not Supported 00:46:59.648 Replay Protected Memory Block: Not Supported 00:46:59.648 00:46:59.648 Firmware Slot Information 00:46:59.648 ========================= 00:46:59.648 Active slot: 1 00:46:59.648 Slot 1 Firmware Revision: 1.0 00:46:59.648 00:46:59.648 00:46:59.648 Commands Supported and Effects 00:46:59.648 ============================== 00:46:59.648 Admin Commands 00:46:59.648 -------------- 00:46:59.648 Delete I/O Submission Queue (00h): Supported 00:46:59.648 Create I/O Submission Queue (01h): Supported 00:46:59.648 Get Log Page (02h): Supported 00:46:59.648 Delete I/O Completion Queue (04h): Supported 00:46:59.648 Create I/O Completion Queue (05h): Supported 00:46:59.648 Identify (06h): Supported 00:46:59.648 Abort (08h): Supported 00:46:59.648 Set Features (09h): Supported 00:46:59.648 Get Features (0Ah): Supported 00:46:59.648 Asynchronous Event Request (0Ch): Supported 00:46:59.648 Namespace Attachment (15h): Supported NS-Inventory-Change 00:46:59.648 Directive Send (19h): Supported 00:46:59.648 Directive Receive (1Ah): Supported 00:46:59.648 Virtualization Management (1Ch): Supported 00:46:59.648 Doorbell Buffer Config (7Ch): Supported 00:46:59.648 Format NVM (80h): Supported LBA-Change 00:46:59.648 I/O Commands 00:46:59.648 ------------ 00:46:59.648 Flush (00h): Supported LBA-Change 00:46:59.648 Write (01h): Supported LBA-Change 00:46:59.649 Read (02h): Supported 00:46:59.649 Compare (05h): Supported 00:46:59.649 Write Zeroes (08h): Supported LBA-Change 00:46:59.649 Dataset Management (09h): Supported LBA-Change 00:46:59.649 Unknown (0Ch): Supported 00:46:59.649 Unknown (12h): Supported 00:46:59.649 Copy (19h): Supported LBA-Change 00:46:59.649 Unknown (1Dh): Supported LBA-Change 00:46:59.649 00:46:59.649 Error Log 00:46:59.649 ========= 00:46:59.649 00:46:59.649 Arbitration 00:46:59.649 =========== 00:46:59.649 Arbitration Burst: no limit 00:46:59.649 00:46:59.649 Power Management 00:46:59.649 ================ 00:46:59.649 Number of Power States: 1 00:46:59.649 Current Power State: Power State #0 00:46:59.649 Power State #0: 00:46:59.649 Max Power: 25.00 W 00:46:59.649 Non-Operational State: Operational 00:46:59.649 Entry Latency: 16 microseconds 00:46:59.649 Exit Latency: 4 microseconds 00:46:59.649 Relative Read Throughput: 0 00:46:59.649 Relative Read Latency: 0 00:46:59.649 Relative Write Throughput: 0 00:46:59.649 Relative Write Latency: 0 00:46:59.649 Idle Power: Not Reported 00:46:59.649 Active Power: Not Reported 00:46:59.649 Non-Operational Permissive Mode: Not Supported 00:46:59.649 00:46:59.649 Health Information 00:46:59.649 ================== 00:46:59.649 Critical Warnings: 00:46:59.649 Available Spare Space: OK 00:46:59.649 Temperature: OK 00:46:59.649 Device Reliability: OK 00:46:59.649 Read Only: No 00:46:59.649 Volatile Memory Backup: OK 00:46:59.649 Current Temperature: 323 Kelvin (50 Celsius) 00:46:59.649 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:46:59.649 Available Spare: 0% 00:46:59.649 Available Spare Threshold: 0% 00:46:59.649 Life Percentage Used: 0% 00:46:59.649 Data Units Read: 4438 00:46:59.649 Data Units Written: 4094 00:46:59.649 Host Read Commands: 223636 00:46:59.649 Host Write Commands: 236611 00:46:59.649 Controller Busy Time: 0 minutes 00:46:59.649 Power Cycles: 0 00:46:59.649 Power On Hours: 0 hours 00:46:59.649 Unsafe Shutdowns: 0 00:46:59.649 Unrecoverable Media Errors: 0 00:46:59.649 Lifetime Error Log Entries: 0 00:46:59.649 Warning Temperature Time: 0 minutes 00:46:59.649 Critical Temperature Time: 0 minutes 00:46:59.649 00:46:59.649 Number of Queues 00:46:59.649 ================ 00:46:59.649 Number of I/O Submission Queues: 64 00:46:59.649 Number of I/O Completion Queues: 64 00:46:59.649 00:46:59.649 ZNS Specific Controller Data 00:46:59.649 ============================ 00:46:59.649 Zone Append Size Limit: 0 00:46:59.649 00:46:59.649 00:46:59.649 Active Namespaces 00:46:59.649 ================= 00:46:59.649 Namespace ID:1 00:46:59.649 Error Recovery Timeout: Unlimited 00:46:59.649 Command Set Identifier: NVM (00h) 00:46:59.649 Deallocate: Supported 00:46:59.649 Deallocated/Unwritten Error: Supported 00:46:59.649 Deallocated Read Value: All 0x00 00:46:59.649 Deallocate in Write Zeroes: Not Supported 00:46:59.649 Deallocated Guard Field: 0xFFFF 00:46:59.649 Flush: Supported 00:46:59.649 Reservation: Not Supported 00:46:59.649 Namespace Sharing Capabilities: Private 00:46:59.649 Size (in LBAs): 1310720 (5GiB) 00:46:59.649 Capacity (in LBAs): 1310720 (5GiB) 00:46:59.649 Utilization (in LBAs): 1310720 (5GiB) 00:46:59.649 Thin Provisioning: Not Supported 00:46:59.649 Per-NS Atomic Units: No 00:46:59.649 Maximum Single Source Range Length: 128 00:46:59.649 Maximum Copy Length: 128 00:46:59.649 Maximum Source Range Count: 128 00:46:59.649 NGUID/EUI64 Never Reused: No 00:46:59.649 Namespace Write Protected: No 00:46:59.649 Number of LBA Formats: 8 00:46:59.649 Current LBA Format: LBA Format #04 00:46:59.649 LBA Format #00: Data Size: 512 Metadata Size: 0 00:46:59.649 LBA Format #01: Data Size: 512 Metadata Size: 8 00:46:59.649 LBA Format #02: Data Size: 512 Metadata Size: 16 00:46:59.649 LBA Format #03: Data Size: 512 Metadata Size: 64 00:46:59.649 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:46:59.649 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:46:59.649 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:46:59.649 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:46:59.649 00:46:59.649 NVM Specific Namespace Data 00:46:59.649 =========================== 00:46:59.649 Logical Block Storage Tag Mask: 0 00:46:59.649 Protection Information Capabilities: 00:46:59.649 16b Guard Protection Information Storage Tag Support: No 00:46:59.649 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:46:59.649 Storage Tag Check Read Support: No 00:46:59.649 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.649 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:46:59.649 09:13:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:46:59.907 ===================================================== 00:46:59.907 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:46:59.907 ===================================================== 00:46:59.907 Controller Capabilities/Features 00:46:59.907 ================================ 00:46:59.907 Vendor ID: 1b36 00:46:59.907 Subsystem Vendor ID: 1af4 00:46:59.907 Serial Number: 12340 00:46:59.907 Model Number: QEMU NVMe Ctrl 00:46:59.907 Firmware Version: 8.0.0 00:46:59.907 Recommended Arb Burst: 6 00:46:59.907 IEEE OUI Identifier: 00 54 52 00:46:59.907 Multi-path I/O 00:46:59.907 May have multiple subsystem ports: No 00:46:59.907 May have multiple controllers: No 00:46:59.907 Associated with SR-IOV VF: No 00:46:59.907 Max Data Transfer Size: 524288 00:46:59.907 Max Number of Namespaces: 256 00:46:59.907 Max Number of I/O Queues: 64 00:46:59.907 NVMe Specification Version (VS): 1.4 00:46:59.907 NVMe Specification Version (Identify): 1.4 00:46:59.907 Maximum Queue Entries: 2048 00:46:59.907 Contiguous Queues Required: Yes 00:46:59.907 Arbitration Mechanisms Supported 00:46:59.907 Weighted Round Robin: Not Supported 00:46:59.907 Vendor Specific: Not Supported 00:46:59.907 Reset Timeout: 7500 ms 00:46:59.907 Doorbell Stride: 4 bytes 00:46:59.907 NVM Subsystem Reset: Not Supported 00:46:59.907 Command Sets Supported 00:46:59.907 NVM Command Set: Supported 00:46:59.907 Boot Partition: Not Supported 00:46:59.907 Memory Page Size Minimum: 4096 bytes 00:46:59.907 Memory Page Size Maximum: 65536 bytes 00:46:59.907 Persistent Memory Region: Not Supported 00:46:59.907 Optional Asynchronous Events Supported 00:46:59.907 Namespace Attribute Notices: Supported 00:46:59.907 Firmware Activation Notices: Not Supported 00:46:59.907 ANA Change Notices: Not Supported 00:46:59.907 PLE Aggregate Log Change Notices: Not Supported 00:46:59.907 LBA Status Info Alert Notices: Not Supported 00:46:59.907 EGE Aggregate Log Change Notices: Not Supported 00:46:59.907 Normal NVM Subsystem Shutdown event: Not Supported 00:46:59.907 Zone Descriptor Change Notices: Not Supported 00:46:59.907 Discovery Log Change Notices: Not Supported 00:46:59.907 Controller Attributes 00:46:59.907 128-bit Host Identifier: Not Supported 00:46:59.907 Non-Operational Permissive Mode: Not Supported 00:46:59.907 NVM Sets: Not Supported 00:46:59.907 Read Recovery Levels: Not Supported 00:46:59.907 Endurance Groups: Not Supported 00:46:59.907 Predictable Latency Mode: Not Supported 00:46:59.907 Traffic Based Keep ALive: Not Supported 00:46:59.907 Namespace Granularity: Not Supported 00:46:59.907 SQ Associations: Not Supported 00:46:59.907 UUID List: Not Supported 00:46:59.908 Multi-Domain Subsystem: Not Supported 00:46:59.908 Fixed Capacity Management: Not Supported 00:46:59.908 Variable Capacity Management: Not Supported 00:46:59.908 Delete Endurance Group: Not Supported 00:46:59.908 Delete NVM Set: Not Supported 00:46:59.908 Extended LBA Formats Supported: Supported 00:46:59.908 Flexible Data Placement Supported: Not Supported 00:46:59.908 00:46:59.908 Controller Memory Buffer Support 00:46:59.908 
================================ 00:46:59.908 Supported: No 00:46:59.908 00:46:59.908 Persistent Memory Region Support 00:46:59.908 ================================ 00:46:59.908 Supported: No 00:46:59.908 00:46:59.908 Admin Command Set Attributes 00:46:59.908 ============================ 00:46:59.908 Security Send/Receive: Not Supported 00:46:59.908 Format NVM: Supported 00:46:59.908 Firmware Activate/Download: Not Supported 00:46:59.908 Namespace Management: Supported 00:46:59.908 Device Self-Test: Not Supported 00:46:59.908 Directives: Supported 00:46:59.908 NVMe-MI: Not Supported 00:46:59.908 Virtualization Management: Not Supported 00:46:59.908 Doorbell Buffer Config: Supported 00:46:59.908 Get LBA Status Capability: Not Supported 00:46:59.908 Command & Feature Lockdown Capability: Not Supported 00:46:59.908 Abort Command Limit: 4 00:46:59.908 Async Event Request Limit: 4 00:46:59.908 Number of Firmware Slots: N/A 00:46:59.908 Firmware Slot 1 Read-Only: N/A 00:46:59.908 Firmware Activation Without Reset: N/A 00:46:59.908 Multiple Update Detection Support: N/A 00:46:59.908 Firmware Update Granularity: No Information Provided 00:46:59.908 Per-Namespace SMART Log: Yes 00:46:59.908 Asymmetric Namespace Access Log Page: Not Supported 00:46:59.908 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:46:59.908 Command Effects Log Page: Supported 00:46:59.908 Get Log Page Extended Data: Supported 00:46:59.908 Telemetry Log Pages: Not Supported 00:46:59.908 Persistent Event Log Pages: Not Supported 00:46:59.908 Supported Log Pages Log Page: May Support 00:46:59.908 Commands Supported & Effects Log Page: Not Supported 00:46:59.908 Feature Identifiers & Effects Log Page:May Support 00:46:59.908 NVMe-MI Commands & Effects Log Page: May Support 00:46:59.908 Data Area 4 for Telemetry Log: Not Supported 00:46:59.908 Error Log Page Entries Supported: 1 00:46:59.908 Keep Alive: Not Supported 00:46:59.908 00:46:59.908 NVM Command Set Attributes 00:46:59.908 ========================== 00:46:59.908 Submission Queue Entry Size 00:46:59.908 Max: 64 00:46:59.908 Min: 64 00:46:59.908 Completion Queue Entry Size 00:46:59.908 Max: 16 00:46:59.908 Min: 16 00:46:59.908 Number of Namespaces: 256 00:46:59.908 Compare Command: Supported 00:46:59.908 Write Uncorrectable Command: Not Supported 00:46:59.908 Dataset Management Command: Supported 00:46:59.908 Write Zeroes Command: Supported 00:46:59.908 Set Features Save Field: Supported 00:46:59.908 Reservations: Not Supported 00:46:59.908 Timestamp: Supported 00:46:59.908 Copy: Supported 00:46:59.908 Volatile Write Cache: Present 00:46:59.908 Atomic Write Unit (Normal): 1 00:46:59.908 Atomic Write Unit (PFail): 1 00:46:59.908 Atomic Compare & Write Unit: 1 00:46:59.908 Fused Compare & Write: Not Supported 00:46:59.908 Scatter-Gather List 00:46:59.908 SGL Command Set: Supported 00:46:59.908 SGL Keyed: Not Supported 00:46:59.908 SGL Bit Bucket Descriptor: Not Supported 00:46:59.908 SGL Metadata Pointer: Not Supported 00:46:59.908 Oversized SGL: Not Supported 00:46:59.908 SGL Metadata Address: Not Supported 00:46:59.908 SGL Offset: Not Supported 00:46:59.908 Transport SGL Data Block: Not Supported 00:46:59.908 Replay Protected Memory Block: Not Supported 00:46:59.908 00:46:59.908 Firmware Slot Information 00:46:59.908 ========================= 00:46:59.908 Active slot: 1 00:46:59.908 Slot 1 Firmware Revision: 1.0 00:46:59.908 00:46:59.908 00:46:59.908 Commands Supported and Effects 00:46:59.908 ============================== 00:46:59.908 Admin Commands 00:46:59.908 -------------- 
00:46:59.908 Delete I/O Submission Queue (00h): Supported 00:46:59.908 Create I/O Submission Queue (01h): Supported 00:46:59.908 Get Log Page (02h): Supported 00:46:59.908 Delete I/O Completion Queue (04h): Supported 00:46:59.908 Create I/O Completion Queue (05h): Supported 00:46:59.908 Identify (06h): Supported 00:46:59.908 Abort (08h): Supported 00:46:59.908 Set Features (09h): Supported 00:46:59.908 Get Features (0Ah): Supported 00:46:59.908 Asynchronous Event Request (0Ch): Supported 00:46:59.908 Namespace Attachment (15h): Supported NS-Inventory-Change 00:46:59.908 Directive Send (19h): Supported 00:46:59.908 Directive Receive (1Ah): Supported 00:46:59.908 Virtualization Management (1Ch): Supported 00:46:59.908 Doorbell Buffer Config (7Ch): Supported 00:46:59.908 Format NVM (80h): Supported LBA-Change 00:46:59.908 I/O Commands 00:46:59.908 ------------ 00:46:59.908 Flush (00h): Supported LBA-Change 00:46:59.908 Write (01h): Supported LBA-Change 00:46:59.908 Read (02h): Supported 00:46:59.908 Compare (05h): Supported 00:46:59.908 Write Zeroes (08h): Supported LBA-Change 00:46:59.908 Dataset Management (09h): Supported LBA-Change 00:46:59.908 Unknown (0Ch): Supported 00:46:59.908 Unknown (12h): Supported 00:46:59.908 Copy (19h): Supported LBA-Change 00:46:59.908 Unknown (1Dh): Supported LBA-Change 00:46:59.908 00:46:59.908 Error Log 00:46:59.908 ========= 00:46:59.908 00:46:59.908 Arbitration 00:46:59.908 =========== 00:46:59.908 Arbitration Burst: no limit 00:46:59.908 00:46:59.908 Power Management 00:46:59.908 ================ 00:46:59.908 Number of Power States: 1 00:46:59.908 Current Power State: Power State #0 00:46:59.908 Power State #0: 00:46:59.908 Max Power: 25.00 W 00:46:59.908 Non-Operational State: Operational 00:46:59.908 Entry Latency: 16 microseconds 00:46:59.908 Exit Latency: 4 microseconds 00:46:59.908 Relative Read Throughput: 0 00:46:59.908 Relative Read Latency: 0 00:46:59.908 Relative Write Throughput: 0 00:46:59.908 Relative Write Latency: 0 00:46:59.908 Idle Power: Not Reported 00:46:59.908 Active Power: Not Reported 00:46:59.908 Non-Operational Permissive Mode: Not Supported 00:46:59.908 00:46:59.908 Health Information 00:46:59.908 ================== 00:46:59.908 Critical Warnings: 00:46:59.908 Available Spare Space: OK 00:46:59.908 Temperature: OK 00:46:59.908 Device Reliability: OK 00:46:59.908 Read Only: No 00:46:59.908 Volatile Memory Backup: OK 00:46:59.908 Current Temperature: 323 Kelvin (50 Celsius) 00:46:59.908 Temperature Threshold: 343 Kelvin (70 Celsius) 00:46:59.908 Available Spare: 0% 00:46:59.908 Available Spare Threshold: 0% 00:46:59.908 Life Percentage Used: 0% 00:46:59.908 Data Units Read: 4438 00:46:59.908 Data Units Written: 4094 00:46:59.908 Host Read Commands: 223636 00:46:59.908 Host Write Commands: 236611 00:46:59.908 Controller Busy Time: 0 minutes 00:46:59.908 Power Cycles: 0 00:46:59.908 Power On Hours: 0 hours 00:46:59.908 Unsafe Shutdowns: 0 00:46:59.908 Unrecoverable Media Errors: 0 00:46:59.908 Lifetime Error Log Entries: 0 00:46:59.908 Warning Temperature Time: 0 minutes 00:46:59.908 Critical Temperature Time: 0 minutes 00:46:59.908 00:46:59.908 Number of Queues 00:46:59.908 ================ 00:46:59.908 Number of I/O Submission Queues: 64 00:46:59.908 Number of I/O Completion Queues: 64 00:46:59.908 00:46:59.908 ZNS Specific Controller Data 00:46:59.908 ============================ 00:46:59.908 Zone Append Size Limit: 0 00:46:59.908 00:46:59.908 00:46:59.908 Active Namespaces 00:46:59.908 ================= 00:46:59.908 Namespace 
ID:1 00:46:59.908 Error Recovery Timeout: Unlimited 00:46:59.908 Command Set Identifier: NVM (00h) 00:46:59.908 Deallocate: Supported 00:46:59.908 Deallocated/Unwritten Error: Supported 00:46:59.908 Deallocated Read Value: All 0x00 00:46:59.908 Deallocate in Write Zeroes: Not Supported 00:46:59.908 Deallocated Guard Field: 0xFFFF 00:46:59.908 Flush: Supported 00:46:59.908 Reservation: Not Supported 00:46:59.908 Namespace Sharing Capabilities: Private 00:46:59.908 Size (in LBAs): 1310720 (5GiB) 00:46:59.908 Capacity (in LBAs): 1310720 (5GiB) 00:46:59.908 Utilization (in LBAs): 1310720 (5GiB) 00:46:59.908 Thin Provisioning: Not Supported 00:46:59.908 Per-NS Atomic Units: No 00:46:59.908 Maximum Single Source Range Length: 128 00:46:59.908 Maximum Copy Length: 128 00:46:59.908 Maximum Source Range Count: 128 00:46:59.908 NGUID/EUI64 Never Reused: No 00:46:59.908 Namespace Write Protected: No 00:46:59.908 Number of LBA Formats: 8 00:46:59.908 Current LBA Format: LBA Format #04 00:46:59.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:46:59.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:46:59.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:46:59.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:46:59.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:46:59.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:46:59.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:46:59.909 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:46:59.909 00:46:59.909 NVM Specific Namespace Data 00:46:59.909 =========================== 00:46:59.909 Logical Block Storage Tag Mask: 0 00:46:59.909 Protection Information Capabilities: 00:46:59.909 16b Guard Protection Information Storage Tag Support: No 00:46:59.909 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:46:59.909 Storage Tag Check Read Support: No 00:46:59.909 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:46:59.909 00:46:59.909 real 0m0.704s 00:46:59.909 user 0m0.295s 00:46:59.909 sys 0m0.304s 00:46:59.909 09:13:35 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:59.909 ************************************ 00:46:59.909 END TEST nvme_identify 00:46:59.909 ************************************ 00:46:59.909 09:13:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:46:59.909 09:13:35 nvme -- common/autotest_common.sh@1142 -- # return 0 00:46:59.909 09:13:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:46:59.909 09:13:35 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:46:59.909 09:13:35 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:59.909 09:13:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:46:59.909 
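For reference, the identify stage above and the perf stage that follows can be reproduced outside the autotest harness with the same binaries and flags that appear in this log. This is a minimal sketch, assuming the SPDK tree is built at /home/vagrant/spdk_repo/spdk (the path used by this run) and that the QEMU NVMe controller is still bound at PCI address 0000:00:10.0; on another machine the repository path and the BDF will differ.

    # Identify the controller by transport address (same invocation as nvme.sh@16 above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0

    # 1-second runs at queue depth 128 with 12288-byte I/O and latency histograms enabled,
    # matching the read and write nvme_perf invocations recorded below (nvme.sh@22 / nvme.sh@23)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

The throughput columns in the perf summary tables below follow directly from the 12288-byte I/O size: 76445.52 IOPS x 12288 bytes is about 895.85 MiB/s for the read run, and 62235.29 IOPS x 12288 bytes is about 729.32 MiB/s for the write run.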
************************************ 00:46:59.909 START TEST nvme_perf 00:46:59.909 ************************************ 00:46:59.909 09:13:35 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:46:59.909 09:13:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:47:01.283 Initializing NVMe Controllers 00:47:01.284 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:01.284 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:01.284 Initialization complete. Launching workers. 00:47:01.284 ======================================================== 00:47:01.284 Latency(us) 00:47:01.284 Device Information : IOPS MiB/s Average min max 00:47:01.284 PCIE (0000:00:10.0) NSID 1 from core 0: 76445.52 895.85 1672.90 708.39 6812.41 00:47:01.284 ======================================================== 00:47:01.284 Total : 76445.52 895.85 1672.90 708.39 6812.41 00:47:01.284 00:47:01.284 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:01.284 ================================================================================= 00:47:01.284 1.00000% : 908.567us 00:47:01.284 10.00000% : 1094.749us 00:47:01.284 25.00000% : 1295.825us 00:47:01.284 50.00000% : 1616.058us 00:47:01.284 75.00000% : 1951.185us 00:47:01.284 90.00000% : 2368.233us 00:47:01.284 95.00000% : 2621.440us 00:47:01.284 98.00000% : 2844.858us 00:47:01.284 99.00000% : 3142.749us 00:47:01.284 99.50000% : 3470.429us 00:47:01.284 99.90000% : 4170.473us 00:47:01.284 99.99000% : 6523.811us 00:47:01.284 99.99900% : 6821.702us 00:47:01.284 99.99990% : 6821.702us 00:47:01.284 99.99999% : 6821.702us 00:47:01.284 00:47:01.284 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:01.284 ============================================================================== 00:47:01.284 Range in us Cumulative IO count 00:47:01.284 707.491 - 711.215: 0.0026% ( 2) 00:47:01.284 718.662 - 722.385: 0.0039% ( 1) 00:47:01.284 722.385 - 726.109: 0.0065% ( 2) 00:47:01.284 726.109 - 729.833: 0.0078% ( 1) 00:47:01.284 733.556 - 737.280: 0.0118% ( 3) 00:47:01.284 741.004 - 744.727: 0.0131% ( 1) 00:47:01.284 752.175 - 755.898: 0.0170% ( 3) 00:47:01.284 759.622 - 763.345: 0.0209% ( 3) 00:47:01.284 763.345 - 767.069: 0.0222% ( 1) 00:47:01.284 767.069 - 770.793: 0.0262% ( 3) 00:47:01.284 770.793 - 774.516: 0.0275% ( 1) 00:47:01.284 774.516 - 778.240: 0.0301% ( 2) 00:47:01.284 778.240 - 781.964: 0.0340% ( 3) 00:47:01.284 789.411 - 793.135: 0.0379% ( 3) 00:47:01.284 793.135 - 796.858: 0.0445% ( 5) 00:47:01.284 796.858 - 800.582: 0.0484% ( 3) 00:47:01.284 800.582 - 804.305: 0.0536% ( 4) 00:47:01.284 804.305 - 808.029: 0.0562% ( 2) 00:47:01.284 808.029 - 811.753: 0.0601% ( 3) 00:47:01.284 811.753 - 815.476: 0.0680% ( 6) 00:47:01.284 815.476 - 819.200: 0.0824% ( 11) 00:47:01.284 819.200 - 822.924: 0.0941% ( 9) 00:47:01.284 822.924 - 826.647: 0.1059% ( 9) 00:47:01.284 826.647 - 830.371: 0.1229% ( 13) 00:47:01.284 830.371 - 834.095: 0.1399% ( 13) 00:47:01.284 834.095 - 837.818: 0.1608% ( 16) 00:47:01.284 837.818 - 841.542: 0.1817% ( 16) 00:47:01.284 841.542 - 845.265: 0.2001% ( 14) 00:47:01.284 845.265 - 848.989: 0.2301% ( 23) 00:47:01.284 848.989 - 852.713: 0.2563% ( 20) 00:47:01.284 852.713 - 856.436: 0.2772% ( 16) 00:47:01.284 856.436 - 860.160: 0.3190% ( 32) 00:47:01.284 860.160 - 863.884: 0.3635% ( 34) 00:47:01.284 863.884 - 867.607: 0.4014% ( 29) 00:47:01.284 867.607 - 871.331: 0.4420% ( 31) 00:47:01.284 871.331 - 875.055: 0.4969% ( 
42) 00:47:01.284 875.055 - 878.778: 0.5518% ( 42) 00:47:01.284 878.778 - 882.502: 0.6080% ( 43) 00:47:01.284 882.502 - 886.225: 0.6603% ( 40) 00:47:01.284 886.225 - 889.949: 0.7231% ( 48) 00:47:01.284 889.949 - 893.673: 0.7754% ( 40) 00:47:01.284 893.673 - 897.396: 0.8368% ( 47) 00:47:01.284 897.396 - 901.120: 0.9035% ( 51) 00:47:01.284 901.120 - 904.844: 0.9624% ( 45) 00:47:01.284 904.844 - 908.567: 1.0277% ( 50) 00:47:01.284 908.567 - 912.291: 1.1101% ( 63) 00:47:01.284 912.291 - 916.015: 1.1794% ( 53) 00:47:01.284 916.015 - 919.738: 1.2618% ( 63) 00:47:01.284 919.738 - 923.462: 1.3638% ( 78) 00:47:01.284 923.462 - 927.185: 1.4788% ( 88) 00:47:01.284 927.185 - 930.909: 1.5887% ( 84) 00:47:01.284 930.909 - 934.633: 1.6959% ( 82) 00:47:01.284 934.633 - 938.356: 1.7979% ( 78) 00:47:01.284 938.356 - 942.080: 1.9116% ( 87) 00:47:01.284 942.080 - 945.804: 2.0175% ( 81) 00:47:01.284 945.804 - 949.527: 2.1405% ( 94) 00:47:01.284 949.527 - 953.251: 2.2856% ( 111) 00:47:01.284 953.251 - 960.698: 2.5589% ( 209) 00:47:01.284 960.698 - 968.145: 2.8400% ( 215) 00:47:01.284 968.145 - 975.593: 3.1643% ( 248) 00:47:01.284 975.593 - 983.040: 3.5082% ( 263) 00:47:01.284 983.040 - 990.487: 3.8429% ( 256) 00:47:01.284 990.487 - 997.935: 4.2168% ( 286) 00:47:01.284 997.935 - 1005.382: 4.6078% ( 299) 00:47:01.284 1005.382 - 1012.829: 5.0341% ( 326) 00:47:01.284 1012.829 - 1020.276: 5.4433% ( 313) 00:47:01.284 1020.276 - 1027.724: 5.8617% ( 320) 00:47:01.284 1027.724 - 1035.171: 6.3259% ( 355) 00:47:01.284 1035.171 - 1042.618: 6.7352% ( 313) 00:47:01.284 1042.618 - 1050.065: 7.2425% ( 388) 00:47:01.284 1050.065 - 1057.513: 7.7054% ( 354) 00:47:01.284 1057.513 - 1064.960: 8.1905% ( 371) 00:47:01.284 1064.960 - 1072.407: 8.6730% ( 369) 00:47:01.284 1072.407 - 1079.855: 9.1685% ( 379) 00:47:01.284 1079.855 - 1087.302: 9.6576% ( 374) 00:47:01.284 1087.302 - 1094.749: 10.1793% ( 399) 00:47:01.284 1094.749 - 1102.196: 10.7049% ( 402) 00:47:01.284 1102.196 - 1109.644: 11.2109% ( 387) 00:47:01.284 1109.644 - 1117.091: 11.7575% ( 418) 00:47:01.284 1117.091 - 1124.538: 12.2596% ( 384) 00:47:01.284 1124.538 - 1131.985: 12.8676% ( 465) 00:47:01.284 1131.985 - 1139.433: 13.3645% ( 380) 00:47:01.284 1139.433 - 1146.880: 13.9620% ( 457) 00:47:01.284 1146.880 - 1154.327: 14.4837% ( 399) 00:47:01.284 1154.327 - 1161.775: 15.0512% ( 434) 00:47:01.284 1161.775 - 1169.222: 15.5951% ( 416) 00:47:01.284 1169.222 - 1176.669: 16.1678% ( 438) 00:47:01.284 1176.669 - 1184.116: 16.7549% ( 449) 00:47:01.284 1184.116 - 1191.564: 17.2949% ( 413) 00:47:01.284 1191.564 - 1199.011: 17.8977% ( 461) 00:47:01.284 1199.011 - 1206.458: 18.4521% ( 424) 00:47:01.284 1206.458 - 1213.905: 19.0641% ( 468) 00:47:01.284 1213.905 - 1221.353: 19.6106% ( 418) 00:47:01.284 1221.353 - 1228.800: 20.2304% ( 474) 00:47:01.284 1228.800 - 1236.247: 20.7835% ( 423) 00:47:01.284 1236.247 - 1243.695: 21.3993% ( 471) 00:47:01.284 1243.695 - 1251.142: 21.9550% ( 425) 00:47:01.284 1251.142 - 1258.589: 22.5526% ( 457) 00:47:01.284 1258.589 - 1266.036: 23.1227% ( 436) 00:47:01.284 1266.036 - 1273.484: 23.7359% ( 469) 00:47:01.284 1273.484 - 1280.931: 24.2694% ( 408) 00:47:01.284 1280.931 - 1288.378: 24.8879% ( 473) 00:47:01.284 1288.378 - 1295.825: 25.4449% ( 426) 00:47:01.284 1295.825 - 1303.273: 26.0137% ( 435) 00:47:01.284 1303.273 - 1310.720: 26.6478% ( 485) 00:47:01.284 1310.720 - 1318.167: 27.1735% ( 402) 00:47:01.284 1318.167 - 1325.615: 27.7893% ( 471) 00:47:01.284 1325.615 - 1333.062: 28.3398% ( 421) 00:47:01.284 1333.062 - 1340.509: 28.9557% ( 471) 00:47:01.284 
1340.509 - 1347.956: 29.4970% ( 414) 00:47:01.284 1347.956 - 1355.404: 30.1037% ( 464) 00:47:01.284 1355.404 - 1362.851: 30.6816% ( 442) 00:47:01.284 1362.851 - 1370.298: 31.2360% ( 424) 00:47:01.284 1370.298 - 1377.745: 31.8558% ( 474) 00:47:01.284 1377.745 - 1385.193: 32.4246% ( 435) 00:47:01.284 1385.193 - 1392.640: 33.0248% ( 459) 00:47:01.284 1392.640 - 1400.087: 33.5831% ( 427) 00:47:01.284 1400.087 - 1407.535: 34.2055% ( 476) 00:47:01.284 1407.535 - 1414.982: 34.7586% ( 423) 00:47:01.284 1414.982 - 1422.429: 35.3443% ( 448) 00:47:01.284 1422.429 - 1429.876: 35.9262% ( 445) 00:47:01.284 1429.876 - 1437.324: 36.4976% ( 437) 00:47:01.284 1437.324 - 1444.771: 37.0965% ( 458) 00:47:01.284 1444.771 - 1452.218: 37.6665% ( 436) 00:47:01.284 1452.218 - 1459.665: 38.2576% ( 452) 00:47:01.284 1459.665 - 1467.113: 38.8499% ( 453) 00:47:01.284 1467.113 - 1474.560: 39.4291% ( 443) 00:47:01.284 1474.560 - 1482.007: 40.0214% ( 453) 00:47:01.284 1482.007 - 1489.455: 40.5902% ( 435) 00:47:01.284 1489.455 - 1496.902: 41.1577% ( 434) 00:47:01.284 1496.902 - 1504.349: 41.7932% ( 486) 00:47:01.284 1504.349 - 1511.796: 42.3136% ( 398) 00:47:01.284 1511.796 - 1519.244: 42.9190% ( 463) 00:47:01.284 1519.244 - 1526.691: 43.4851% ( 433) 00:47:01.284 1526.691 - 1534.138: 44.0735% ( 450) 00:47:01.284 1534.138 - 1541.585: 44.6606% ( 449) 00:47:01.284 1541.585 - 1549.033: 45.2529% ( 453) 00:47:01.284 1549.033 - 1556.480: 45.8296% ( 441) 00:47:01.284 1556.480 - 1563.927: 46.4350% ( 463) 00:47:01.284 1563.927 - 1571.375: 47.0090% ( 439) 00:47:01.284 1571.375 - 1578.822: 47.5895% ( 444) 00:47:01.284 1578.822 - 1586.269: 48.2015% ( 468) 00:47:01.284 1586.269 - 1593.716: 48.7703% ( 435) 00:47:01.284 1593.716 - 1601.164: 49.3691% ( 458) 00:47:01.284 1601.164 - 1608.611: 49.9627% ( 454) 00:47:01.284 1608.611 - 1616.058: 50.5590% ( 456) 00:47:01.284 1616.058 - 1623.505: 51.1251% ( 433) 00:47:01.284 1623.505 - 1630.953: 51.7201% ( 455) 00:47:01.284 1630.953 - 1638.400: 52.3124% ( 453) 00:47:01.284 1638.400 - 1645.847: 52.9073% ( 455) 00:47:01.284 1645.847 - 1653.295: 53.4827% ( 440) 00:47:01.284 1653.295 - 1660.742: 54.0946% ( 468) 00:47:01.284 1660.742 - 1668.189: 54.6869% ( 453) 00:47:01.284 1668.189 - 1675.636: 55.2596% ( 438) 00:47:01.284 1675.636 - 1683.084: 55.8545% ( 455) 00:47:01.284 1683.084 - 1690.531: 56.4534% ( 458) 00:47:01.284 1690.531 - 1697.978: 57.0483% ( 455) 00:47:01.284 1697.978 - 1705.425: 57.6250% ( 441) 00:47:01.284 1705.425 - 1712.873: 58.2225% ( 457) 00:47:01.284 1712.873 - 1720.320: 58.7665% ( 416) 00:47:01.284 1720.320 - 1727.767: 59.4019% ( 486) 00:47:01.284 1727.767 - 1735.215: 59.9694% ( 434) 00:47:01.284 1735.215 - 1742.662: 60.5473% ( 442) 00:47:01.284 1742.662 - 1750.109: 61.1580% ( 467) 00:47:01.284 1750.109 - 1757.556: 61.7346% ( 441) 00:47:01.284 1757.556 - 1765.004: 62.3099% ( 440) 00:47:01.284 1765.004 - 1772.451: 62.8839% ( 439) 00:47:01.284 1772.451 - 1779.898: 63.4776% ( 454) 00:47:01.284 1779.898 - 1787.345: 64.0503% ( 438) 00:47:01.285 1787.345 - 1794.793: 64.6125% ( 430) 00:47:01.285 1794.793 - 1802.240: 65.1931% ( 444) 00:47:01.285 1802.240 - 1809.687: 65.7854% ( 453) 00:47:01.285 1809.687 - 1817.135: 66.3319% ( 418) 00:47:01.285 1817.135 - 1824.582: 66.9138% ( 445) 00:47:01.285 1824.582 - 1832.029: 67.4381% ( 401) 00:47:01.285 1832.029 - 1839.476: 67.9912% ( 423) 00:47:01.285 1839.476 - 1846.924: 68.5325% ( 414) 00:47:01.285 1846.924 - 1854.371: 69.0386% ( 387) 00:47:01.285 1854.371 - 1861.818: 69.5642% ( 402) 00:47:01.285 1861.818 - 1869.265: 70.0702% ( 387) 00:47:01.285 
1869.265 - 1876.713: 70.6259% ( 425) 00:47:01.285 1876.713 - 1884.160: 71.0836% ( 350) 00:47:01.285 1884.160 - 1891.607: 71.6380% ( 424) 00:47:01.285 1891.607 - 1899.055: 72.1231% ( 371) 00:47:01.285 1899.055 - 1906.502: 72.6265% ( 385) 00:47:01.285 1906.502 - 1921.396: 73.6084% ( 751) 00:47:01.285 1921.396 - 1936.291: 74.5956% ( 755) 00:47:01.285 1936.291 - 1951.185: 75.5868% ( 758) 00:47:01.285 1951.185 - 1966.080: 76.5073% ( 704) 00:47:01.285 1966.080 - 1980.975: 77.3925% ( 677) 00:47:01.285 1980.975 - 1995.869: 78.2515% ( 657) 00:47:01.285 1995.869 - 2010.764: 79.0531% ( 613) 00:47:01.285 2010.764 - 2025.658: 79.7958% ( 568) 00:47:01.285 2025.658 - 2040.553: 80.4875% ( 529) 00:47:01.285 2040.553 - 2055.447: 81.1608% ( 515) 00:47:01.285 2055.447 - 2070.342: 81.7584% ( 457) 00:47:01.285 2070.342 - 2085.236: 82.3481% ( 451) 00:47:01.285 2085.236 - 2100.131: 82.8816% ( 408) 00:47:01.285 2100.131 - 2115.025: 83.3994% ( 396) 00:47:01.285 2115.025 - 2129.920: 83.8858% ( 372) 00:47:01.285 2129.920 - 2144.815: 84.3774% ( 376) 00:47:01.285 2144.815 - 2159.709: 84.8364% ( 351) 00:47:01.285 2159.709 - 2174.604: 85.2901% ( 347) 00:47:01.285 2174.604 - 2189.498: 85.7190% ( 328) 00:47:01.285 2189.498 - 2204.393: 86.1583% ( 336) 00:47:01.285 2204.393 - 2219.287: 86.5571% ( 305) 00:47:01.285 2219.287 - 2234.182: 86.9572% ( 306) 00:47:01.285 2234.182 - 2249.076: 87.3560% ( 305) 00:47:01.285 2249.076 - 2263.971: 87.7457% ( 298) 00:47:01.285 2263.971 - 2278.865: 88.0908% ( 264) 00:47:01.285 2278.865 - 2293.760: 88.4334% ( 262) 00:47:01.285 2293.760 - 2308.655: 88.7669% ( 255) 00:47:01.285 2308.655 - 2323.549: 89.0977% ( 253) 00:47:01.285 2323.549 - 2338.444: 89.4272% ( 252) 00:47:01.285 2338.444 - 2353.338: 89.7475% ( 245) 00:47:01.285 2353.338 - 2368.233: 90.0613% ( 240) 00:47:01.285 2368.233 - 2383.127: 90.3686% ( 235) 00:47:01.285 2383.127 - 2398.022: 90.6759% ( 235) 00:47:01.285 2398.022 - 2412.916: 90.9949% ( 244) 00:47:01.285 2412.916 - 2427.811: 91.2852% ( 222) 00:47:01.285 2427.811 - 2442.705: 91.5925% ( 235) 00:47:01.285 2442.705 - 2457.600: 91.8945% ( 231) 00:47:01.285 2457.600 - 2472.495: 92.2018% ( 235) 00:47:01.285 2472.495 - 2487.389: 92.4947% ( 224) 00:47:01.285 2487.389 - 2502.284: 92.8019% ( 235) 00:47:01.285 2502.284 - 2517.178: 93.0948% ( 224) 00:47:01.285 2517.178 - 2532.073: 93.3786% ( 217) 00:47:01.285 2532.073 - 2546.967: 93.6479% ( 206) 00:47:01.285 2546.967 - 2561.862: 93.9356% ( 220) 00:47:01.285 2561.862 - 2576.756: 94.2049% ( 206) 00:47:01.285 2576.756 - 2591.651: 94.4848% ( 214) 00:47:01.285 2591.651 - 2606.545: 94.7410% ( 196) 00:47:01.285 2606.545 - 2621.440: 95.0052% ( 202) 00:47:01.285 2621.440 - 2636.335: 95.2667% ( 200) 00:47:01.285 2636.335 - 2651.229: 95.5203% ( 194) 00:47:01.285 2651.229 - 2666.124: 95.7714% ( 192) 00:47:01.285 2666.124 - 2681.018: 96.0224% ( 192) 00:47:01.285 2681.018 - 2695.913: 96.2408% ( 167) 00:47:01.285 2695.913 - 2710.807: 96.4722% ( 177) 00:47:01.285 2710.807 - 2725.702: 96.6788% ( 158) 00:47:01.285 2725.702 - 2740.596: 96.8906% ( 162) 00:47:01.285 2740.596 - 2755.491: 97.0763% ( 142) 00:47:01.285 2755.491 - 2770.385: 97.2594% ( 140) 00:47:01.285 2770.385 - 2785.280: 97.4228% ( 125) 00:47:01.285 2785.280 - 2800.175: 97.5863% ( 125) 00:47:01.285 2800.175 - 2815.069: 97.7366% ( 115) 00:47:01.285 2815.069 - 2829.964: 97.8792% ( 109) 00:47:01.285 2829.964 - 2844.858: 98.0060% ( 97) 00:47:01.285 2844.858 - 2859.753: 98.1197% ( 87) 00:47:01.285 2859.753 - 2874.647: 98.2243% ( 80) 00:47:01.285 2874.647 - 2889.542: 98.3146% ( 69) 00:47:01.285 2889.542 
- 2904.436: 98.3891% ( 57) 00:47:01.285 2904.436 - 2919.331: 98.4519% ( 48) 00:47:01.285 2919.331 - 2934.225: 98.5068% ( 42) 00:47:01.285 2934.225 - 2949.120: 98.5565% ( 38) 00:47:01.285 2949.120 - 2964.015: 98.5970% ( 31) 00:47:01.285 2964.015 - 2978.909: 98.6467% ( 38) 00:47:01.285 2978.909 - 2993.804: 98.6859% ( 30) 00:47:01.285 2993.804 - 3008.698: 98.7238% ( 29) 00:47:01.285 3008.698 - 3023.593: 98.7618% ( 29) 00:47:01.285 3023.593 - 3038.487: 98.7957% ( 26) 00:47:01.285 3038.487 - 3053.382: 98.8311% ( 27) 00:47:01.285 3053.382 - 3068.276: 98.8664% ( 27) 00:47:01.285 3068.276 - 3083.171: 98.8977% ( 24) 00:47:01.285 3083.171 - 3098.065: 98.9304% ( 25) 00:47:01.285 3098.065 - 3112.960: 98.9618% ( 24) 00:47:01.285 3112.960 - 3127.855: 98.9958% ( 26) 00:47:01.285 3127.855 - 3142.749: 99.0233% ( 21) 00:47:01.285 3142.749 - 3157.644: 99.0533% ( 23) 00:47:01.285 3157.644 - 3172.538: 99.0847% ( 24) 00:47:01.285 3172.538 - 3187.433: 99.1122% ( 21) 00:47:01.285 3187.433 - 3202.327: 99.1396% ( 21) 00:47:01.285 3202.327 - 3217.222: 99.1658% ( 20) 00:47:01.285 3217.222 - 3232.116: 99.1932% ( 21) 00:47:01.285 3232.116 - 3247.011: 99.2181% ( 19) 00:47:01.285 3247.011 - 3261.905: 99.2442% ( 20) 00:47:01.285 3261.905 - 3276.800: 99.2678% ( 18) 00:47:01.285 3276.800 - 3291.695: 99.2900% ( 17) 00:47:01.285 3291.695 - 3306.589: 99.3109% ( 16) 00:47:01.285 3306.589 - 3321.484: 99.3332% ( 17) 00:47:01.285 3321.484 - 3336.378: 99.3528% ( 15) 00:47:01.285 3336.378 - 3351.273: 99.3724% ( 15) 00:47:01.285 3351.273 - 3366.167: 99.3881% ( 12) 00:47:01.285 3366.167 - 3381.062: 99.4064% ( 14) 00:47:01.285 3381.062 - 3395.956: 99.4208% ( 11) 00:47:01.285 3395.956 - 3410.851: 99.4417% ( 16) 00:47:01.285 3410.851 - 3425.745: 99.4574% ( 12) 00:47:01.285 3425.745 - 3440.640: 99.4718% ( 11) 00:47:01.285 3440.640 - 3455.535: 99.4901% ( 14) 00:47:01.285 3455.535 - 3470.429: 99.5084% ( 14) 00:47:01.285 3470.429 - 3485.324: 99.5267% ( 14) 00:47:01.285 3485.324 - 3500.218: 99.5437% ( 13) 00:47:01.285 3500.218 - 3515.113: 99.5594% ( 12) 00:47:01.285 3515.113 - 3530.007: 99.5764% ( 13) 00:47:01.285 3530.007 - 3544.902: 99.5881% ( 9) 00:47:01.285 3544.902 - 3559.796: 99.6038% ( 12) 00:47:01.285 3559.796 - 3574.691: 99.6195% ( 12) 00:47:01.285 3574.691 - 3589.585: 99.6378% ( 14) 00:47:01.285 3589.585 - 3604.480: 99.6522% ( 11) 00:47:01.285 3604.480 - 3619.375: 99.6640% ( 9) 00:47:01.285 3619.375 - 3634.269: 99.6783% ( 11) 00:47:01.285 3634.269 - 3649.164: 99.6940% ( 12) 00:47:01.285 3649.164 - 3664.058: 99.7071% ( 10) 00:47:01.285 3664.058 - 3678.953: 99.7189% ( 9) 00:47:01.285 3678.953 - 3693.847: 99.7333% ( 11) 00:47:01.285 3693.847 - 3708.742: 99.7450% ( 9) 00:47:01.285 3708.742 - 3723.636: 99.7594% ( 11) 00:47:01.285 3723.636 - 3738.531: 99.7699% ( 8) 00:47:01.285 3738.531 - 3753.425: 99.7777% ( 6) 00:47:01.285 3753.425 - 3768.320: 99.7856% ( 6) 00:47:01.285 3768.320 - 3783.215: 99.7947% ( 7) 00:47:01.285 3783.215 - 3798.109: 99.8039% ( 7) 00:47:01.285 3798.109 - 3813.004: 99.8104% ( 5) 00:47:01.285 3813.004 - 3842.793: 99.8261% ( 12) 00:47:01.285 3842.793 - 3872.582: 99.8418% ( 12) 00:47:01.285 3872.582 - 3902.371: 99.8562% ( 11) 00:47:01.285 3902.371 - 3932.160: 99.8692% ( 10) 00:47:01.285 3932.160 - 3961.949: 99.8810% ( 9) 00:47:01.285 3961.949 - 3991.738: 99.8889% ( 6) 00:47:01.285 3991.738 - 4021.527: 99.8941% ( 4) 00:47:01.285 4021.527 - 4051.316: 99.8954% ( 1) 00:47:01.285 4051.316 - 4081.105: 99.8967% ( 1) 00:47:01.285 4081.105 - 4110.895: 99.8980% ( 1) 00:47:01.285 4110.895 - 4140.684: 99.8993% ( 1) 00:47:01.285 
4140.684 - 4170.473: 99.9006% ( 1) 00:47:01.285 4170.473 - 4200.262: 99.9019% ( 1) 00:47:01.285 4200.262 - 4230.051: 99.9032% ( 1) 00:47:01.285 4230.051 - 4259.840: 99.9045% ( 1) 00:47:01.285 4259.840 - 4289.629: 99.9059% ( 1) 00:47:01.285 4319.418 - 4349.207: 99.9072% ( 1) 00:47:01.285 4349.207 - 4378.996: 99.9085% ( 1) 00:47:01.285 4378.996 - 4408.785: 99.9098% ( 1) 00:47:01.285 4408.785 - 4438.575: 99.9111% ( 1) 00:47:01.285 4438.575 - 4468.364: 99.9124% ( 1) 00:47:01.285 4468.364 - 4498.153: 99.9137% ( 1) 00:47:01.285 4498.153 - 4527.942: 99.9150% ( 1) 00:47:01.285 4527.942 - 4557.731: 99.9163% ( 1) 00:47:01.285 4587.520 - 4617.309: 99.9176% ( 1) 00:47:01.285 4617.309 - 4647.098: 99.9189% ( 1) 00:47:01.285 4647.098 - 4676.887: 99.9202% ( 1) 00:47:01.285 4706.676 - 4736.465: 99.9215% ( 1) 00:47:01.285 4736.465 - 4766.255: 99.9229% ( 1) 00:47:01.285 4766.255 - 4796.044: 99.9242% ( 1) 00:47:01.285 4796.044 - 4825.833: 99.9255% ( 1) 00:47:01.285 4825.833 - 4855.622: 99.9268% ( 1) 00:47:01.285 4855.622 - 4885.411: 99.9281% ( 1) 00:47:01.285 4885.411 - 4915.200: 99.9294% ( 1) 00:47:01.285 4944.989 - 4974.778: 99.9307% ( 1) 00:47:01.285 4974.778 - 5004.567: 99.9320% ( 1) 00:47:01.285 5004.567 - 5034.356: 99.9333% ( 1) 00:47:01.285 5034.356 - 5064.145: 99.9346% ( 1) 00:47:01.285 5064.145 - 5093.935: 99.9359% ( 1) 00:47:01.285 5093.935 - 5123.724: 99.9372% ( 1) 00:47:01.285 5123.724 - 5153.513: 99.9385% ( 1) 00:47:01.285 5153.513 - 5183.302: 99.9399% ( 1) 00:47:01.285 5183.302 - 5213.091: 99.9412% ( 1) 00:47:01.285 5213.091 - 5242.880: 99.9425% ( 1) 00:47:01.285 5272.669 - 5302.458: 99.9438% ( 1) 00:47:01.286 5302.458 - 5332.247: 99.9451% ( 1) 00:47:01.286 5332.247 - 5362.036: 99.9464% ( 1) 00:47:01.286 5362.036 - 5391.825: 99.9477% ( 1) 00:47:01.286 5391.825 - 5421.615: 99.9490% ( 1) 00:47:01.286 5451.404 - 5481.193: 99.9503% ( 1) 00:47:01.286 5481.193 - 5510.982: 99.9516% ( 1) 00:47:01.286 5510.982 - 5540.771: 99.9529% ( 1) 00:47:01.286 5570.560 - 5600.349: 99.9542% ( 1) 00:47:01.286 5600.349 - 5630.138: 99.9555% ( 1) 00:47:01.286 5630.138 - 5659.927: 99.9569% ( 1) 00:47:01.286 5659.927 - 5689.716: 99.9582% ( 1) 00:47:01.286 5689.716 - 5719.505: 99.9595% ( 1) 00:47:01.286 5719.505 - 5749.295: 99.9608% ( 1) 00:47:01.286 5749.295 - 5779.084: 99.9621% ( 1) 00:47:01.286 5808.873 - 5838.662: 99.9634% ( 1) 00:47:01.286 5838.662 - 5868.451: 99.9647% ( 1) 00:47:01.286 5868.451 - 5898.240: 99.9660% ( 1) 00:47:01.286 5898.240 - 5928.029: 99.9673% ( 1) 00:47:01.286 5928.029 - 5957.818: 99.9686% ( 1) 00:47:01.286 5957.818 - 5987.607: 99.9699% ( 1) 00:47:01.286 5987.607 - 6017.396: 99.9712% ( 1) 00:47:01.286 6017.396 - 6047.185: 99.9725% ( 1) 00:47:01.286 6076.975 - 6106.764: 99.9738% ( 1) 00:47:01.286 6106.764 - 6136.553: 99.9752% ( 1) 00:47:01.286 6136.553 - 6166.342: 99.9765% ( 1) 00:47:01.286 6166.342 - 6196.131: 99.9778% ( 1) 00:47:01.286 6196.131 - 6225.920: 99.9791% ( 1) 00:47:01.286 6225.920 - 6255.709: 99.9804% ( 1) 00:47:01.286 6255.709 - 6285.498: 99.9817% ( 1) 00:47:01.286 6285.498 - 6315.287: 99.9830% ( 1) 00:47:01.286 6315.287 - 6345.076: 99.9843% ( 1) 00:47:01.286 6374.865 - 6404.655: 99.9856% ( 1) 00:47:01.286 6404.655 - 6434.444: 99.9869% ( 1) 00:47:01.286 6434.444 - 6464.233: 99.9882% ( 1) 00:47:01.286 6464.233 - 6494.022: 99.9895% ( 1) 00:47:01.286 6494.022 - 6523.811: 99.9908% ( 1) 00:47:01.286 6553.600 - 6583.389: 99.9922% ( 1) 00:47:01.286 6583.389 - 6613.178: 99.9935% ( 1) 00:47:01.286 6613.178 - 6642.967: 99.9948% ( 1) 00:47:01.286 6672.756 - 6702.545: 99.9961% ( 1) 00:47:01.286 
6702.545 - 6732.335: 99.9974% ( 1) 00:47:01.286 6732.335 - 6762.124: 99.9987% ( 1) 00:47:01.286 6791.913 - 6821.702: 100.0000% ( 1) 00:47:01.286 00:47:01.286 09:13:36 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:47:02.661 Initializing NVMe Controllers 00:47:02.661 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:02.661 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:02.661 Initialization complete. Launching workers. 00:47:02.661 ======================================================== 00:47:02.661 Latency(us) 00:47:02.661 Device Information : IOPS MiB/s Average min max 00:47:02.661 PCIE (0000:00:10.0) NSID 1 from core 0: 62235.29 729.32 2055.75 667.92 9900.12 00:47:02.661 ======================================================== 00:47:02.661 Total : 62235.29 729.32 2055.75 667.92 9900.12 00:47:02.661 00:47:02.661 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:02.661 ================================================================================= 00:47:02.661 1.00000% : 1139.433us 00:47:02.661 10.00000% : 1392.640us 00:47:02.661 25.00000% : 1571.375us 00:47:02.661 50.00000% : 1861.818us 00:47:02.661 75.00000% : 2368.233us 00:47:02.661 90.00000% : 3053.382us 00:47:02.661 95.00000% : 3515.113us 00:47:02.661 98.00000% : 3961.949us 00:47:02.661 99.00000% : 4200.262us 00:47:02.661 99.50000% : 4378.996us 00:47:02.661 99.90000% : 5481.193us 00:47:02.661 99.99000% : 6762.124us 00:47:02.661 99.99900% : 9949.556us 00:47:02.661 99.99990% : 9949.556us 00:47:02.661 99.99999% : 9949.556us 00:47:02.661 00:47:02.661 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:47:02.661 ============================================================================== 00:47:02.661 Range in us Cumulative IO count 00:47:02.661 666.531 - 670.255: 0.0016% ( 1) 00:47:02.661 700.044 - 703.767: 0.0032% ( 1) 00:47:02.661 737.280 - 741.004: 0.0048% ( 1) 00:47:02.661 748.451 - 752.175: 0.0064% ( 1) 00:47:02.661 755.898 - 759.622: 0.0080% ( 1) 00:47:02.661 767.069 - 770.793: 0.0096% ( 1) 00:47:02.661 770.793 - 774.516: 0.0112% ( 1) 00:47:02.661 778.240 - 781.964: 0.0128% ( 1) 00:47:02.661 781.964 - 785.687: 0.0177% ( 3) 00:47:02.661 789.411 - 793.135: 0.0193% ( 1) 00:47:02.661 793.135 - 796.858: 0.0209% ( 1) 00:47:02.661 796.858 - 800.582: 0.0225% ( 1) 00:47:02.661 804.305 - 808.029: 0.0241% ( 1) 00:47:02.661 808.029 - 811.753: 0.0257% ( 1) 00:47:02.661 815.476 - 819.200: 0.0273% ( 1) 00:47:02.661 819.200 - 822.924: 0.0289% ( 1) 00:47:02.661 826.647 - 830.371: 0.0305% ( 1) 00:47:02.661 830.371 - 834.095: 0.0321% ( 1) 00:47:02.661 834.095 - 837.818: 0.0337% ( 1) 00:47:02.661 845.265 - 848.989: 0.0353% ( 1) 00:47:02.661 848.989 - 852.713: 0.0385% ( 2) 00:47:02.661 860.160 - 863.884: 0.0418% ( 2) 00:47:02.661 863.884 - 867.607: 0.0434% ( 1) 00:47:02.661 867.607 - 871.331: 0.0450% ( 1) 00:47:02.661 871.331 - 875.055: 0.0482% ( 2) 00:47:02.661 875.055 - 878.778: 0.0498% ( 1) 00:47:02.661 878.778 - 882.502: 0.0546% ( 3) 00:47:02.661 882.502 - 886.225: 0.0562% ( 1) 00:47:02.661 886.225 - 889.949: 0.0578% ( 1) 00:47:02.661 893.673 - 897.396: 0.0659% ( 5) 00:47:02.661 904.844 - 908.567: 0.0691% ( 2) 00:47:02.661 908.567 - 912.291: 0.0723% ( 2) 00:47:02.661 912.291 - 916.015: 0.0755% ( 2) 00:47:02.661 916.015 - 919.738: 0.0787% ( 2) 00:47:02.661 919.738 - 923.462: 0.0851% ( 4) 00:47:02.661 927.185 - 930.909: 0.0867% ( 1) 00:47:02.661 930.909 - 934.633: 0.0916% ( 3) 00:47:02.661 934.633 - 
938.356: 0.0964% ( 3) 00:47:02.661 938.356 - 942.080: 0.0996% ( 2) 00:47:02.661 945.804 - 949.527: 0.1060% ( 4) 00:47:02.661 953.251 - 960.698: 0.1140% ( 5) 00:47:02.661 960.698 - 968.145: 0.1205% ( 4) 00:47:02.661 968.145 - 975.593: 0.1333% ( 8) 00:47:02.661 975.593 - 983.040: 0.1413% ( 5) 00:47:02.661 983.040 - 990.487: 0.1606% ( 12) 00:47:02.661 990.487 - 997.935: 0.1703% ( 6) 00:47:02.661 997.935 - 1005.382: 0.1799% ( 6) 00:47:02.661 1005.382 - 1012.829: 0.2008% ( 13) 00:47:02.661 1012.829 - 1020.276: 0.2168% ( 10) 00:47:02.661 1020.276 - 1027.724: 0.2473% ( 19) 00:47:02.661 1027.724 - 1035.171: 0.2698% ( 14) 00:47:02.661 1035.171 - 1042.618: 0.2971% ( 17) 00:47:02.661 1042.618 - 1050.065: 0.3405% ( 27) 00:47:02.661 1050.065 - 1057.513: 0.3726% ( 20) 00:47:02.661 1057.513 - 1064.960: 0.4128% ( 25) 00:47:02.661 1064.960 - 1072.407: 0.4497% ( 23) 00:47:02.661 1072.407 - 1079.855: 0.4979% ( 30) 00:47:02.661 1079.855 - 1087.302: 0.5461% ( 30) 00:47:02.661 1087.302 - 1094.749: 0.5830% ( 23) 00:47:02.661 1094.749 - 1102.196: 0.6666% ( 52) 00:47:02.661 1102.196 - 1109.644: 0.7228% ( 35) 00:47:02.661 1109.644 - 1117.091: 0.7886% ( 41) 00:47:02.661 1117.091 - 1124.538: 0.8705% ( 51) 00:47:02.661 1124.538 - 1131.985: 0.9428% ( 45) 00:47:02.661 1131.985 - 1139.433: 1.0312% ( 55) 00:47:02.661 1139.433 - 1146.880: 1.1163% ( 53) 00:47:02.661 1146.880 - 1154.327: 1.2175% ( 63) 00:47:02.661 1154.327 - 1161.775: 1.3219% ( 65) 00:47:02.661 1161.775 - 1169.222: 1.4247% ( 64) 00:47:02.661 1169.222 - 1176.669: 1.5628% ( 86) 00:47:02.661 1176.669 - 1184.116: 1.7025% ( 87) 00:47:02.661 1184.116 - 1191.564: 1.8375% ( 84) 00:47:02.661 1191.564 - 1199.011: 1.9900% ( 95) 00:47:02.661 1199.011 - 1206.458: 2.1426% ( 95) 00:47:02.661 1206.458 - 1213.905: 2.3322% ( 118) 00:47:02.661 1213.905 - 1221.353: 2.5120% ( 112) 00:47:02.661 1221.353 - 1228.800: 2.6807% ( 105) 00:47:02.661 1228.800 - 1236.247: 2.8879% ( 129) 00:47:02.661 1236.247 - 1243.695: 3.0806% ( 120) 00:47:02.661 1243.695 - 1251.142: 3.3183% ( 148) 00:47:02.661 1251.142 - 1258.589: 3.5721% ( 158) 00:47:02.661 1258.589 - 1266.036: 3.8082% ( 147) 00:47:02.661 1266.036 - 1273.484: 4.0572% ( 155) 00:47:02.661 1273.484 - 1280.931: 4.3238% ( 166) 00:47:02.661 1280.931 - 1288.378: 4.5631% ( 149) 00:47:02.661 1288.378 - 1295.825: 4.8474% ( 177) 00:47:02.661 1295.825 - 1303.273: 5.1542% ( 191) 00:47:02.661 1303.273 - 1310.720: 5.5043% ( 218) 00:47:02.661 1310.720 - 1318.167: 5.8432% ( 211) 00:47:02.661 1318.167 - 1325.615: 6.1902% ( 216) 00:47:02.661 1325.615 - 1333.062: 6.5676% ( 235) 00:47:02.661 1333.062 - 1340.509: 6.9515% ( 239) 00:47:02.661 1340.509 - 1347.956: 7.3852% ( 270) 00:47:02.661 1347.956 - 1355.404: 7.8044% ( 261) 00:47:02.661 1355.404 - 1362.851: 8.2252% ( 262) 00:47:02.661 1362.851 - 1370.298: 8.7151% ( 305) 00:47:02.661 1370.298 - 1377.745: 9.2049% ( 305) 00:47:02.661 1377.745 - 1385.193: 9.6900% ( 302) 00:47:02.661 1385.193 - 1392.640: 10.2024% ( 319) 00:47:02.661 1392.640 - 1400.087: 10.7533% ( 343) 00:47:02.661 1400.087 - 1407.535: 11.2432% ( 305) 00:47:02.661 1407.535 - 1414.982: 11.7957% ( 344) 00:47:02.661 1414.982 - 1422.429: 12.3338% ( 335) 00:47:02.661 1422.429 - 1429.876: 12.8783% ( 339) 00:47:02.661 1429.876 - 1437.324: 13.4966% ( 385) 00:47:02.661 1437.324 - 1444.771: 14.0845% ( 366) 00:47:02.661 1444.771 - 1452.218: 14.6499% ( 352) 00:47:02.662 1452.218 - 1459.665: 15.2602% ( 380) 00:47:02.662 1459.665 - 1467.113: 15.8930% ( 394) 00:47:02.662 1467.113 - 1474.560: 16.5500% ( 409) 00:47:02.662 1474.560 - 1482.007: 17.2406% ( 430) 
00:47:02.662 1482.007 - 1489.455: 17.9040% ( 413) 00:47:02.662 1489.455 - 1496.902: 18.5930% ( 429) 00:47:02.662 1496.902 - 1504.349: 19.3013% ( 441) 00:47:02.662 1504.349 - 1511.796: 20.0610% ( 473) 00:47:02.662 1511.796 - 1519.244: 20.7083% ( 403) 00:47:02.662 1519.244 - 1526.691: 21.4231% ( 445) 00:47:02.662 1526.691 - 1534.138: 22.0880% ( 414) 00:47:02.662 1534.138 - 1541.585: 22.7498% ( 412) 00:47:02.662 1541.585 - 1549.033: 23.4790% ( 454) 00:47:02.662 1549.033 - 1556.480: 24.1439% ( 414) 00:47:02.662 1556.480 - 1563.927: 24.8008% ( 409) 00:47:02.662 1563.927 - 1571.375: 25.5124% ( 443) 00:47:02.662 1571.375 - 1578.822: 26.1805% ( 416) 00:47:02.662 1578.822 - 1586.269: 26.9274% ( 465) 00:47:02.662 1586.269 - 1593.716: 27.6100% ( 425) 00:47:02.662 1593.716 - 1601.164: 28.2862% ( 421) 00:47:02.662 1601.164 - 1608.611: 29.0026% ( 446) 00:47:02.662 1608.611 - 1616.058: 29.7302% ( 453) 00:47:02.662 1616.058 - 1623.505: 30.5284% ( 497) 00:47:02.662 1623.505 - 1630.953: 31.2833% ( 470) 00:47:02.662 1630.953 - 1638.400: 31.9322% ( 404) 00:47:02.662 1638.400 - 1645.847: 32.6405% ( 441) 00:47:02.662 1645.847 - 1653.295: 33.2942% ( 407) 00:47:02.662 1653.295 - 1660.742: 34.0668% ( 481) 00:47:02.662 1660.742 - 1668.189: 34.7093% ( 400) 00:47:02.662 1668.189 - 1675.636: 35.3662% ( 409) 00:47:02.662 1675.636 - 1683.084: 36.0022% ( 396) 00:47:02.662 1683.084 - 1690.531: 36.6833% ( 424) 00:47:02.662 1690.531 - 1697.978: 37.3964% ( 444) 00:47:02.662 1697.978 - 1705.425: 38.0565% ( 411) 00:47:02.662 1705.425 - 1712.873: 38.7440% ( 428) 00:47:02.662 1712.873 - 1720.320: 39.3848% ( 399) 00:47:02.662 1720.320 - 1727.767: 40.0257% ( 399) 00:47:02.662 1727.767 - 1735.215: 40.6714% ( 402) 00:47:02.662 1735.215 - 1742.662: 41.3395% ( 416) 00:47:02.662 1742.662 - 1750.109: 41.9933% ( 407) 00:47:02.662 1750.109 - 1757.556: 42.6181% ( 389) 00:47:02.662 1757.556 - 1765.004: 43.2027% ( 364) 00:47:02.662 1765.004 - 1772.451: 43.7857% ( 363) 00:47:02.662 1772.451 - 1779.898: 44.3270% ( 337) 00:47:02.662 1779.898 - 1787.345: 44.8346% ( 316) 00:47:02.662 1787.345 - 1794.793: 45.3807% ( 340) 00:47:02.662 1794.793 - 1802.240: 45.9444% ( 351) 00:47:02.662 1802.240 - 1809.687: 46.4873% ( 338) 00:47:02.662 1809.687 - 1817.135: 47.0334% ( 340) 00:47:02.662 1817.135 - 1824.582: 47.5410% ( 316) 00:47:02.662 1824.582 - 1832.029: 48.0565% ( 321) 00:47:02.662 1832.029 - 1839.476: 48.5818% ( 327) 00:47:02.662 1839.476 - 1846.924: 49.0973% ( 321) 00:47:02.662 1846.924 - 1854.371: 49.6515% ( 345) 00:47:02.662 1854.371 - 1861.818: 50.1686% ( 322) 00:47:02.662 1861.818 - 1869.265: 50.6762% ( 316) 00:47:02.662 1869.265 - 1876.713: 51.2255% ( 342) 00:47:02.662 1876.713 - 1884.160: 51.7170% ( 306) 00:47:02.662 1884.160 - 1891.607: 52.2085% ( 306) 00:47:02.662 1891.607 - 1899.055: 52.7996% ( 368) 00:47:02.662 1899.055 - 1906.502: 53.3987% ( 373) 00:47:02.662 1906.502 - 1921.396: 54.4298% ( 642) 00:47:02.662 1921.396 - 1936.291: 55.3983% ( 603) 00:47:02.662 1936.291 - 1951.185: 56.3492% ( 592) 00:47:02.662 1951.185 - 1966.080: 57.2599% ( 567) 00:47:02.662 1966.080 - 1980.975: 58.1947% ( 582) 00:47:02.662 1980.975 - 1995.869: 59.0845% ( 554) 00:47:02.662 1995.869 - 2010.764: 59.9631% ( 547) 00:47:02.662 2010.764 - 2025.658: 60.8368% ( 544) 00:47:02.662 2025.658 - 2040.553: 61.6672% ( 517) 00:47:02.662 2040.553 - 2055.447: 62.4831% ( 508) 00:47:02.662 2055.447 - 2070.342: 63.2750% ( 493) 00:47:02.662 2070.342 - 2085.236: 64.0202% ( 464) 00:47:02.662 2085.236 - 2100.131: 64.7350% ( 445) 00:47:02.662 2100.131 - 2115.025: 65.4642% ( 454) 
00:47:02.662 2115.025 - 2129.920: 66.1500% ( 427) 00:47:02.662 2129.920 - 2144.815: 66.8102% ( 411) 00:47:02.662 2144.815 - 2159.709: 67.4815% ( 418) 00:47:02.662 2159.709 - 2174.604: 68.1529% ( 418) 00:47:02.662 2174.604 - 2189.498: 68.7986% ( 402) 00:47:02.662 2189.498 - 2204.393: 69.4009% ( 375) 00:47:02.662 2204.393 - 2219.287: 70.0145% ( 382) 00:47:02.662 2219.287 - 2234.182: 70.5606% ( 340) 00:47:02.662 2234.182 - 2249.076: 71.1821% ( 387) 00:47:02.662 2249.076 - 2263.971: 71.7106% ( 329) 00:47:02.662 2263.971 - 2278.865: 72.2502% ( 336) 00:47:02.662 2278.865 - 2293.760: 72.8493% ( 373) 00:47:02.662 2293.760 - 2308.655: 73.3778% ( 329) 00:47:02.662 2308.655 - 2323.549: 73.8934% ( 321) 00:47:02.662 2323.549 - 2338.444: 74.4186% ( 327) 00:47:02.662 2338.444 - 2353.338: 74.9149% ( 309) 00:47:02.662 2353.338 - 2368.233: 75.3534% ( 273) 00:47:02.662 2368.233 - 2383.127: 75.8497% ( 309) 00:47:02.662 2383.127 - 2398.022: 76.3138% ( 289) 00:47:02.662 2398.022 - 2412.916: 76.7588% ( 277) 00:47:02.662 2412.916 - 2427.811: 77.1667% ( 254) 00:47:02.662 2427.811 - 2442.705: 77.5924% ( 265) 00:47:02.662 2442.705 - 2457.600: 77.9810% ( 242) 00:47:02.662 2457.600 - 2472.495: 78.3376% ( 222) 00:47:02.662 2472.495 - 2487.389: 78.7231% ( 240) 00:47:02.662 2487.389 - 2502.284: 79.1343% ( 256) 00:47:02.662 2502.284 - 2517.178: 79.4860% ( 219) 00:47:02.662 2517.178 - 2532.073: 79.8458% ( 224) 00:47:02.662 2532.073 - 2546.967: 80.1686% ( 201) 00:47:02.662 2546.967 - 2561.862: 80.4578% ( 180) 00:47:02.662 2561.862 - 2576.756: 80.8015% ( 214) 00:47:02.662 2576.756 - 2591.651: 81.0810% ( 174) 00:47:02.662 2591.651 - 2606.545: 81.3652% ( 177) 00:47:02.662 2606.545 - 2621.440: 81.6752% ( 193) 00:47:02.662 2621.440 - 2636.335: 81.9643% ( 180) 00:47:02.662 2636.335 - 2651.229: 82.2567% ( 182) 00:47:02.662 2651.229 - 2666.124: 82.5522% ( 184) 00:47:02.662 2666.124 - 2681.018: 82.8702% ( 198) 00:47:02.662 2681.018 - 2695.913: 83.1770% ( 191) 00:47:02.662 2695.913 - 2710.807: 83.4484% ( 169) 00:47:02.662 2710.807 - 2725.702: 83.7359% ( 179) 00:47:02.662 2725.702 - 2740.596: 84.0218% ( 178) 00:47:02.662 2740.596 - 2755.491: 84.3158% ( 183) 00:47:02.662 2755.491 - 2770.385: 84.6065% ( 181) 00:47:02.662 2770.385 - 2785.280: 84.8940% ( 179) 00:47:02.662 2785.280 - 2800.175: 85.1574% ( 164) 00:47:02.662 2800.175 - 2815.069: 85.4321% ( 171) 00:47:02.662 2815.069 - 2829.964: 85.7356% ( 189) 00:47:02.662 2829.964 - 2844.858: 86.0135% ( 173) 00:47:02.662 2844.858 - 2859.753: 86.2528% ( 149) 00:47:02.662 2859.753 - 2874.647: 86.5114% ( 161) 00:47:02.662 2874.647 - 2889.542: 86.7620% ( 156) 00:47:02.662 2889.542 - 2904.436: 87.0238% ( 163) 00:47:02.662 2904.436 - 2919.331: 87.2695% ( 153) 00:47:02.662 2919.331 - 2934.225: 87.5040% ( 146) 00:47:02.662 2934.225 - 2949.120: 87.7915% ( 179) 00:47:02.662 2949.120 - 2964.015: 88.0871% ( 184) 00:47:02.662 2964.015 - 2978.909: 88.4260% ( 211) 00:47:02.662 2978.909 - 2993.804: 88.7857% ( 224) 00:47:02.662 2993.804 - 3008.698: 89.1744% ( 242) 00:47:02.662 3008.698 - 3023.593: 89.5808% ( 253) 00:47:02.662 3023.593 - 3038.487: 89.9711% ( 243) 00:47:02.662 3038.487 - 3053.382: 90.3244% ( 220) 00:47:02.662 3053.382 - 3068.276: 90.6248% ( 187) 00:47:02.662 3068.276 - 3083.171: 90.8625% ( 148) 00:47:02.662 3083.171 - 3098.065: 91.1050% ( 151) 00:47:02.662 3098.065 - 3112.960: 91.2898% ( 115) 00:47:02.662 3112.960 - 3127.855: 91.4712% ( 113) 00:47:02.662 3127.855 - 3142.749: 91.6624% ( 119) 00:47:02.662 3142.749 - 3157.644: 91.8198% ( 98) 00:47:02.662 3157.644 - 3172.538: 91.9820% ( 101) 
00:47:02.662 3172.538 - 3187.433: 92.1523% ( 106) 00:47:02.662 3187.433 - 3202.327: 92.3000% ( 92) 00:47:02.662 3202.327 - 3217.222: 92.4510% ( 94) 00:47:02.662 3217.222 - 3232.116: 92.5988% ( 92) 00:47:02.662 3232.116 - 3247.011: 92.7530% ( 96) 00:47:02.662 3247.011 - 3261.905: 92.8959% ( 89) 00:47:02.662 3261.905 - 3276.800: 93.0308% ( 84) 00:47:02.662 3276.800 - 3291.695: 93.1866% ( 97) 00:47:02.662 3291.695 - 3306.589: 93.3328% ( 91) 00:47:02.662 3306.589 - 3321.484: 93.4517% ( 74) 00:47:02.662 3321.484 - 3336.378: 93.5850% ( 83) 00:47:02.662 3336.378 - 3351.273: 93.7102% ( 78) 00:47:02.662 3351.273 - 3366.167: 93.8452% ( 84) 00:47:02.662 3366.167 - 3381.062: 93.9624% ( 73) 00:47:02.663 3381.062 - 3395.956: 94.0861% ( 77) 00:47:02.663 3395.956 - 3410.851: 94.2226% ( 85) 00:47:02.663 3410.851 - 3425.745: 94.3688% ( 91) 00:47:02.663 3425.745 - 3440.640: 94.4651% ( 60) 00:47:02.663 3440.640 - 3455.535: 94.5840% ( 74) 00:47:02.663 3455.535 - 3470.429: 94.7205% ( 85) 00:47:02.663 3470.429 - 3485.324: 94.8249% ( 65) 00:47:02.663 3485.324 - 3500.218: 94.9518% ( 79) 00:47:02.663 3500.218 - 3515.113: 95.0691% ( 73) 00:47:02.663 3515.113 - 3530.007: 95.1927% ( 77) 00:47:02.663 3530.007 - 3544.902: 95.3036% ( 69) 00:47:02.663 3544.902 - 3559.796: 95.4096% ( 66) 00:47:02.663 3559.796 - 3574.691: 95.5252% ( 72) 00:47:02.663 3574.691 - 3589.585: 95.6136% ( 55) 00:47:02.663 3589.585 - 3604.480: 95.7212% ( 67) 00:47:02.663 3604.480 - 3619.375: 95.8256% ( 65) 00:47:02.663 3619.375 - 3634.269: 95.9396% ( 71) 00:47:02.663 3634.269 - 3649.164: 96.0665% ( 79) 00:47:02.663 3649.164 - 3664.058: 96.1580% ( 57) 00:47:02.663 3664.058 - 3678.953: 96.2608% ( 64) 00:47:02.663 3678.953 - 3693.847: 96.3685% ( 67) 00:47:02.663 3693.847 - 3708.742: 96.4793% ( 69) 00:47:02.663 3708.742 - 3723.636: 96.5981% ( 74) 00:47:02.663 3723.636 - 3738.531: 96.7202% ( 76) 00:47:02.663 3738.531 - 3753.425: 96.8391% ( 74) 00:47:02.663 3753.425 - 3768.320: 96.9386% ( 62) 00:47:02.663 3768.320 - 3783.215: 97.0318% ( 58) 00:47:02.663 3783.215 - 3798.109: 97.1185% ( 54) 00:47:02.663 3798.109 - 3813.004: 97.2181% ( 62) 00:47:02.663 3813.004 - 3842.793: 97.3868% ( 105) 00:47:02.663 3842.793 - 3872.582: 97.5426% ( 97) 00:47:02.663 3872.582 - 3902.371: 97.7128% ( 106) 00:47:02.663 3902.371 - 3932.160: 97.8702% ( 98) 00:47:02.663 3932.160 - 3961.949: 98.0228% ( 95) 00:47:02.663 3961.949 - 3991.738: 98.1738% ( 94) 00:47:02.663 3991.738 - 4021.527: 98.3312% ( 98) 00:47:02.663 4021.527 - 4051.316: 98.4709% ( 87) 00:47:02.663 4051.316 - 4081.105: 98.6075% ( 85) 00:47:02.663 4081.105 - 4110.895: 98.7408% ( 83) 00:47:02.663 4110.895 - 4140.684: 98.8628% ( 76) 00:47:02.663 4140.684 - 4170.473: 98.9688% ( 66) 00:47:02.663 4170.473 - 4200.262: 99.0716% ( 64) 00:47:02.663 4200.262 - 4230.051: 99.1712% ( 62) 00:47:02.663 4230.051 - 4259.840: 99.2547% ( 52) 00:47:02.663 4259.840 - 4289.629: 99.3286% ( 46) 00:47:02.663 4289.629 - 4319.418: 99.3913% ( 39) 00:47:02.663 4319.418 - 4349.207: 99.4507% ( 37) 00:47:02.663 4349.207 - 4378.996: 99.5069% ( 35) 00:47:02.663 4378.996 - 4408.785: 99.5583% ( 32) 00:47:02.663 4408.785 - 4438.575: 99.5985% ( 25) 00:47:02.663 4438.575 - 4468.364: 99.6322% ( 21) 00:47:02.663 4468.364 - 4498.153: 99.6611% ( 18) 00:47:02.663 4498.153 - 4527.942: 99.6868% ( 16) 00:47:02.663 4527.942 - 4557.731: 99.7061% ( 12) 00:47:02.663 4557.731 - 4587.520: 99.7221% ( 10) 00:47:02.663 4587.520 - 4617.309: 99.7350% ( 8) 00:47:02.663 4617.309 - 4647.098: 99.7430% ( 5) 00:47:02.663 4647.098 - 4676.887: 99.7494% ( 4) 00:47:02.663 4676.887 - 
4706.676: 99.7559% ( 4) 00:47:02.663 4706.676 - 4736.465: 99.7623% ( 4) 00:47:02.663 4736.465 - 4766.255: 99.7687% ( 4) 00:47:02.663 4766.255 - 4796.044: 99.7767% ( 5) 00:47:02.663 4796.044 - 4825.833: 99.7832% ( 4) 00:47:02.663 4825.833 - 4855.622: 99.7896% ( 4) 00:47:02.663 4855.622 - 4885.411: 99.7960% ( 4) 00:47:02.663 4885.411 - 4915.200: 99.8024% ( 4) 00:47:02.663 4915.200 - 4944.989: 99.8073% ( 3) 00:47:02.663 4944.989 - 4974.778: 99.8121% ( 3) 00:47:02.663 4974.778 - 5004.567: 99.8169% ( 3) 00:47:02.663 5004.567 - 5034.356: 99.8233% ( 4) 00:47:02.663 5034.356 - 5064.145: 99.8314% ( 5) 00:47:02.663 5064.145 - 5093.935: 99.8362% ( 3) 00:47:02.663 5093.935 - 5123.724: 99.8442% ( 5) 00:47:02.663 5123.724 - 5153.513: 99.8474% ( 2) 00:47:02.663 5153.513 - 5183.302: 99.8522% ( 3) 00:47:02.663 5183.302 - 5213.091: 99.8571% ( 3) 00:47:02.663 5213.091 - 5242.880: 99.8635% ( 4) 00:47:02.663 5242.880 - 5272.669: 99.8683% ( 3) 00:47:02.663 5272.669 - 5302.458: 99.8731% ( 3) 00:47:02.663 5302.458 - 5332.247: 99.8763% ( 2) 00:47:02.663 5332.247 - 5362.036: 99.8811% ( 3) 00:47:02.663 5362.036 - 5391.825: 99.8860% ( 3) 00:47:02.663 5391.825 - 5421.615: 99.8908% ( 3) 00:47:02.663 5421.615 - 5451.404: 99.8956% ( 3) 00:47:02.663 5451.404 - 5481.193: 99.9020% ( 4) 00:47:02.663 5481.193 - 5510.982: 99.9068% ( 3) 00:47:02.663 5510.982 - 5540.771: 99.9117% ( 3) 00:47:02.663 5540.771 - 5570.560: 99.9133% ( 1) 00:47:02.663 5570.560 - 5600.349: 99.9165% ( 2) 00:47:02.663 5600.349 - 5630.138: 99.9197% ( 2) 00:47:02.663 5630.138 - 5659.927: 99.9213% ( 1) 00:47:02.663 5659.927 - 5689.716: 99.9245% ( 2) 00:47:02.663 5689.716 - 5719.505: 99.9293% ( 3) 00:47:02.663 5749.295 - 5779.084: 99.9325% ( 2) 00:47:02.663 5779.084 - 5808.873: 99.9341% ( 1) 00:47:02.663 5808.873 - 5838.662: 99.9374% ( 2) 00:47:02.663 5838.662 - 5868.451: 99.9390% ( 1) 00:47:02.663 5868.451 - 5898.240: 99.9406% ( 1) 00:47:02.663 5898.240 - 5928.029: 99.9438% ( 2) 00:47:02.663 5928.029 - 5957.818: 99.9454% ( 1) 00:47:02.663 5957.818 - 5987.607: 99.9470% ( 1) 00:47:02.663 5987.607 - 6017.396: 99.9486% ( 1) 00:47:02.663 6017.396 - 6047.185: 99.9550% ( 4) 00:47:02.663 6047.185 - 6076.975: 99.9566% ( 1) 00:47:02.663 6076.975 - 6106.764: 99.9582% ( 1) 00:47:02.663 6136.553 - 6166.342: 99.9598% ( 1) 00:47:02.663 6166.342 - 6196.131: 99.9615% ( 1) 00:47:02.663 6196.131 - 6225.920: 99.9631% ( 1) 00:47:02.663 6225.920 - 6255.709: 99.9663% ( 2) 00:47:02.663 6255.709 - 6285.498: 99.9679% ( 1) 00:47:02.663 6285.498 - 6315.287: 99.9695% ( 1) 00:47:02.663 6315.287 - 6345.076: 99.9711% ( 1) 00:47:02.663 6345.076 - 6374.865: 99.9727% ( 1) 00:47:02.663 6374.865 - 6404.655: 99.9759% ( 2) 00:47:02.663 6404.655 - 6434.444: 99.9775% ( 1) 00:47:02.663 6434.444 - 6464.233: 99.9791% ( 1) 00:47:02.663 6464.233 - 6494.022: 99.9807% ( 1) 00:47:02.663 6523.811 - 6553.600: 99.9823% ( 1) 00:47:02.663 6553.600 - 6583.389: 99.9839% ( 1) 00:47:02.663 6583.389 - 6613.178: 99.9855% ( 1) 00:47:02.663 6642.967 - 6672.756: 99.9872% ( 1) 00:47:02.663 6702.545 - 6732.335: 99.9888% ( 1) 00:47:02.663 6732.335 - 6762.124: 99.9920% ( 2) 00:47:02.663 6762.124 - 6791.913: 99.9952% ( 2) 00:47:02.663 6791.913 - 6821.702: 99.9968% ( 1) 00:47:02.663 9592.087 - 9651.665: 99.9984% ( 1) 00:47:02.663 9889.978 - 9949.556: 100.0000% ( 1) 00:47:02.663 00:47:02.663 09:13:37 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:47:02.663 ************************************ 00:47:02.663 END TEST nvme_perf 00:47:02.663 ************************************ 00:47:02.663 00:47:02.663 real 
0m2.651s 00:47:02.663 user 0m2.237s 00:47:02.663 sys 0m0.273s 00:47:02.663 09:13:37 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:02.663 09:13:37 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:47:02.663 09:13:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:02.663 09:13:37 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:47:02.663 09:13:37 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:47:02.663 09:13:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:02.663 09:13:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:02.663 ************************************ 00:47:02.663 START TEST nvme_hello_world 00:47:02.663 ************************************ 00:47:02.663 09:13:37 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:47:02.921 Initializing NVMe Controllers 00:47:02.921 Attached to 0000:00:10.0 00:47:02.921 Namespace ID: 1 size: 5GB 00:47:02.921 Initialization complete. 00:47:02.921 INFO: using host memory buffer for IO 00:47:02.921 Hello world! 00:47:02.921 00:47:02.921 real 0m0.321s 00:47:02.921 user 0m0.131s 00:47:02.921 sys 0m0.114s 00:47:02.921 09:13:38 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:02.921 09:13:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:02.921 ************************************ 00:47:02.921 END TEST nvme_hello_world 00:47:02.921 ************************************ 00:47:03.179 09:13:38 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:03.179 09:13:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:47:03.179 09:13:38 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:03.179 09:13:38 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:03.179 09:13:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:03.179 ************************************ 00:47:03.179 START TEST nvme_sgl 00:47:03.179 ************************************ 00:47:03.179 09:13:38 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:47:03.437 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:47:03.437 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:47:03.437 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:47:03.437 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:47:03.437 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:47:03.437 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:47:03.437 NVMe Readv/Writev Request test 00:47:03.437 Attached to 0000:00:10.0 00:47:03.437 0000:00:10.0: build_io_request_2 test passed 00:47:03.437 0000:00:10.0: build_io_request_4 test passed 00:47:03.437 0000:00:10.0: build_io_request_5 test passed 00:47:03.437 0000:00:10.0: build_io_request_6 test passed 00:47:03.437 0000:00:10.0: build_io_request_7 test passed 00:47:03.437 0000:00:10.0: build_io_request_10 test passed 00:47:03.437 Cleaning up... 
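The START TEST / END TEST banners and the real/user/sys triplets throughout this section come from the run_test helper in autotest_common.sh, which wraps each test binary in a timed call (the argument-count checks such as '[' 4 -le 1 ']' belong to the same helper). A minimal sketch of that pattern, under an illustrative name, since the real function also manages xtrace and other bookkeeping:

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"          # emits the real/user/sys lines captured in this log
        echo "END TEST $name"
    }
    run_test_sketch nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0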
00:47:03.437 00:47:03.437 real 0m0.362s 00:47:03.437 user 0m0.164s 00:47:03.437 sys 0m0.126s 00:47:03.437 ************************************ 00:47:03.437 END TEST nvme_sgl 00:47:03.437 ************************************ 00:47:03.437 09:13:38 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:03.437 09:13:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:47:03.437 09:13:38 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:03.437 09:13:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:47:03.437 09:13:38 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:03.437 09:13:38 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:03.437 09:13:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:03.437 ************************************ 00:47:03.437 START TEST nvme_e2edp 00:47:03.437 ************************************ 00:47:03.437 09:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:47:03.695 NVMe Write/Read with End-to-End data protection test 00:47:03.695 Attached to 0000:00:10.0 00:47:03.695 Cleaning up... 00:47:03.695 00:47:03.695 real 0m0.322s 00:47:03.695 user 0m0.139s 00:47:03.695 sys 0m0.115s 00:47:03.695 09:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:03.695 ************************************ 00:47:03.695 END TEST nvme_e2edp 00:47:03.695 ************************************ 00:47:03.695 09:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:47:03.695 09:13:38 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:03.695 09:13:38 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:47:03.695 09:13:38 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:03.695 09:13:38 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:03.695 09:13:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:03.953 ************************************ 00:47:03.953 START TEST nvme_reserve 00:47:03.953 ************************************ 00:47:03.953 09:13:38 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:47:04.212 ===================================================== 00:47:04.212 NVMe Controller at PCI bus 0, device 16, function 0 00:47:04.212 ===================================================== 00:47:04.212 Reservations: Not Supported 00:47:04.212 Reservation test passed 00:47:04.212 00:47:04.212 real 0m0.327s 00:47:04.212 user 0m0.123s 00:47:04.212 sys 0m0.147s 00:47:04.212 09:13:39 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:04.212 ************************************ 00:47:04.212 END TEST nvme_reserve 00:47:04.212 ************************************ 00:47:04.212 09:13:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:47:04.212 09:13:39 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:04.212 09:13:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:47:04.212 09:13:39 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:04.212 09:13:39 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:04.212 09:13:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:04.212 ************************************ 00:47:04.212 START TEST 
nvme_err_injection 00:47:04.212 ************************************ 00:47:04.212 09:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:47:04.470 NVMe Error Injection test 00:47:04.470 Attached to 0000:00:10.0 00:47:04.470 0000:00:10.0: get features failed as expected 00:47:04.470 0000:00:10.0: get features successfully as expected 00:47:04.470 0000:00:10.0: read failed as expected 00:47:04.470 0000:00:10.0: read successfully as expected 00:47:04.470 Cleaning up... 00:47:04.470 00:47:04.470 real 0m0.329s 00:47:04.470 user 0m0.121s 00:47:04.470 sys 0m0.133s 00:47:04.470 ************************************ 00:47:04.470 END TEST nvme_err_injection 00:47:04.470 ************************************ 00:47:04.470 09:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:04.470 09:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:47:04.470 09:13:39 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:04.470 09:13:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:47:04.470 09:13:39 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:47:04.470 09:13:39 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:04.470 09:13:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:04.470 ************************************ 00:47:04.470 START TEST nvme_overhead 00:47:04.470 ************************************ 00:47:04.470 09:13:39 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:47:05.846 Initializing NVMe Controllers 00:47:05.846 Attached to 0000:00:10.0 00:47:05.846 Initialization complete. Launching workers. 
00:47:05.846 submit (in ns) avg, min, max = 14951.1, 12767.3, 134977.3 00:47:05.846 complete (in ns) avg, min, max = 9558.6, 8485.9, 86031.4 00:47:05.846 00:47:05.846 Submit histogram 00:47:05.846 ================ 00:47:05.846 Range in us Cumulative Count 00:47:05.846 12.742 - 12.800: 0.0089% ( 1) 00:47:05.846 13.556 - 13.615: 0.0266% ( 2) 00:47:05.846 13.615 - 13.673: 0.1154% ( 10) 00:47:05.846 13.673 - 13.731: 0.8963% ( 88) 00:47:05.846 13.731 - 13.789: 3.2481% ( 265) 00:47:05.846 13.789 - 13.847: 7.8630% ( 520) 00:47:05.846 13.847 - 13.905: 13.8534% ( 675) 00:47:05.846 13.905 - 13.964: 21.0153% ( 807) 00:47:05.846 13.964 - 14.022: 29.4906% ( 955) 00:47:05.846 14.022 - 14.080: 38.7824% ( 1047) 00:47:05.846 14.080 - 14.138: 47.6216% ( 996) 00:47:05.846 14.138 - 14.196: 54.8722% ( 817) 00:47:05.846 14.196 - 14.255: 61.2709% ( 721) 00:47:05.846 14.255 - 14.313: 67.1725% ( 665) 00:47:05.846 14.313 - 14.371: 71.7962% ( 521) 00:47:05.846 14.371 - 14.429: 75.1065% ( 373) 00:47:05.846 14.429 - 14.487: 77.7068% ( 293) 00:47:05.846 14.487 - 14.545: 79.7568% ( 231) 00:47:05.846 14.545 - 14.604: 81.3543% ( 180) 00:47:05.846 14.604 - 14.662: 82.7210% ( 154) 00:47:05.847 14.662 - 14.720: 83.9457% ( 138) 00:47:05.847 14.720 - 14.778: 84.9752% ( 116) 00:47:05.847 14.778 - 14.836: 85.7916% ( 92) 00:47:05.847 14.836 - 14.895: 86.5282% ( 83) 00:47:05.847 14.895 - 15.011: 87.9304% ( 158) 00:47:05.847 15.011 - 15.127: 88.9776% ( 118) 00:47:05.847 15.127 - 15.244: 89.6610% ( 77) 00:47:05.847 15.244 - 15.360: 90.1402% ( 54) 00:47:05.847 15.360 - 15.476: 90.4686% ( 37) 00:47:05.847 15.476 - 15.593: 90.7082% ( 27) 00:47:05.847 15.593 - 15.709: 90.8324% ( 14) 00:47:05.847 15.709 - 15.825: 90.9478% ( 13) 00:47:05.847 15.825 - 15.942: 91.1342% ( 21) 00:47:05.847 15.942 - 16.058: 91.2584% ( 14) 00:47:05.847 16.058 - 16.175: 91.2851% ( 3) 00:47:05.847 16.175 - 16.291: 91.3294% ( 5) 00:47:05.847 16.291 - 16.407: 91.3561% ( 3) 00:47:05.847 16.407 - 16.524: 91.3827% ( 3) 00:47:05.847 16.524 - 16.640: 91.4448% ( 7) 00:47:05.847 16.640 - 16.756: 91.4625% ( 2) 00:47:05.847 16.756 - 16.873: 91.4980% ( 4) 00:47:05.847 16.873 - 16.989: 91.5158% ( 2) 00:47:05.847 16.989 - 17.105: 91.5602% ( 5) 00:47:05.847 17.105 - 17.222: 91.5779% ( 2) 00:47:05.847 17.222 - 17.338: 91.5868% ( 1) 00:47:05.847 17.338 - 17.455: 91.6223% ( 4) 00:47:05.847 17.455 - 17.571: 91.6400% ( 2) 00:47:05.847 17.571 - 17.687: 91.6578% ( 2) 00:47:05.847 17.687 - 17.804: 91.6933% ( 4) 00:47:05.847 17.804 - 17.920: 91.7288% ( 4) 00:47:05.847 17.920 - 18.036: 91.7554% ( 3) 00:47:05.847 18.036 - 18.153: 91.7820% ( 3) 00:47:05.847 18.385 - 18.502: 91.7998% ( 2) 00:47:05.847 18.618 - 18.735: 91.8087% ( 1) 00:47:05.847 18.851 - 18.967: 91.8175% ( 1) 00:47:05.847 18.967 - 19.084: 91.8619% ( 5) 00:47:05.847 19.084 - 19.200: 91.9240% ( 7) 00:47:05.847 19.200 - 19.316: 91.9684% ( 5) 00:47:05.847 19.316 - 19.433: 92.0483% ( 9) 00:47:05.847 19.433 - 19.549: 92.0838% ( 4) 00:47:05.847 19.549 - 19.665: 92.1370% ( 6) 00:47:05.847 19.665 - 19.782: 92.1636% ( 3) 00:47:05.847 19.782 - 19.898: 92.2080% ( 5) 00:47:05.847 19.898 - 20.015: 92.2968% ( 10) 00:47:05.847 20.015 - 20.131: 92.4831% ( 21) 00:47:05.847 20.131 - 20.247: 92.6695% ( 21) 00:47:05.847 20.247 - 20.364: 92.8115% ( 16) 00:47:05.847 20.364 - 20.480: 93.0156% ( 23) 00:47:05.847 20.480 - 20.596: 93.1665% ( 17) 00:47:05.847 20.596 - 20.713: 93.4150% ( 28) 00:47:05.847 20.713 - 20.829: 93.6546% ( 27) 00:47:05.847 20.829 - 20.945: 93.9563% ( 34) 00:47:05.847 20.945 - 21.062: 94.2137% ( 29) 00:47:05.847 21.062 - 21.178: 
94.4799% ( 30) 00:47:05.847 21.178 - 21.295: 94.6574% ( 20) 00:47:05.847 21.295 - 21.411: 94.8704% ( 24) 00:47:05.847 21.411 - 21.527: 95.1012% ( 26) 00:47:05.847 21.527 - 21.644: 95.3408% ( 27) 00:47:05.847 21.644 - 21.760: 95.6958% ( 40) 00:47:05.847 21.760 - 21.876: 95.9176% ( 25) 00:47:05.847 21.876 - 21.993: 96.1218% ( 23) 00:47:05.847 21.993 - 22.109: 96.2815% ( 18) 00:47:05.847 22.109 - 22.225: 96.4945% ( 24) 00:47:05.847 22.225 - 22.342: 96.6897% ( 22) 00:47:05.847 22.342 - 22.458: 96.8672% ( 20) 00:47:05.847 22.458 - 22.575: 96.9915% ( 14) 00:47:05.847 22.575 - 22.691: 97.1867% ( 22) 00:47:05.847 22.691 - 22.807: 97.3465% ( 18) 00:47:05.847 22.807 - 22.924: 97.5417% ( 22) 00:47:05.847 22.924 - 23.040: 97.6571% ( 13) 00:47:05.847 23.040 - 23.156: 97.7458% ( 10) 00:47:05.847 23.156 - 23.273: 97.8523% ( 12) 00:47:05.847 23.273 - 23.389: 97.9588% ( 12) 00:47:05.847 23.389 - 23.505: 98.1008% ( 16) 00:47:05.847 23.505 - 23.622: 98.1452% ( 5) 00:47:05.847 23.622 - 23.738: 98.2339% ( 10) 00:47:05.847 23.738 - 23.855: 98.3759% ( 16) 00:47:05.847 23.855 - 23.971: 98.4469% ( 8) 00:47:05.847 23.971 - 24.087: 98.4913% ( 5) 00:47:05.847 24.087 - 24.204: 98.5889% ( 11) 00:47:05.847 24.204 - 24.320: 98.6333% ( 5) 00:47:05.847 24.320 - 24.436: 98.7309% ( 11) 00:47:05.847 24.436 - 24.553: 98.7575% ( 3) 00:47:05.847 24.553 - 24.669: 98.8374% ( 9) 00:47:05.847 24.669 - 24.785: 98.9173% ( 9) 00:47:05.847 24.785 - 24.902: 98.9883% ( 8) 00:47:05.847 24.902 - 25.018: 99.0593% ( 8) 00:47:05.847 25.018 - 25.135: 99.0948% ( 4) 00:47:05.847 25.135 - 25.251: 99.1569% ( 7) 00:47:05.847 25.251 - 25.367: 99.1924% ( 4) 00:47:05.847 25.367 - 25.484: 99.2634% ( 8) 00:47:05.847 25.484 - 25.600: 99.3078% ( 5) 00:47:05.847 25.600 - 25.716: 99.3344% ( 3) 00:47:05.847 25.716 - 25.833: 99.3433% ( 1) 00:47:05.847 25.833 - 25.949: 99.3788% ( 4) 00:47:05.847 25.949 - 26.065: 99.3876% ( 1) 00:47:05.847 26.182 - 26.298: 99.4231% ( 4) 00:47:05.847 26.298 - 26.415: 99.4409% ( 2) 00:47:05.847 26.415 - 26.531: 99.4675% ( 3) 00:47:05.847 26.531 - 26.647: 99.4853% ( 2) 00:47:05.847 26.647 - 26.764: 99.4941% ( 1) 00:47:05.847 26.764 - 26.880: 99.5296% ( 4) 00:47:05.847 26.880 - 26.996: 99.5474% ( 2) 00:47:05.847 26.996 - 27.113: 99.5651% ( 2) 00:47:05.847 27.229 - 27.345: 99.5829% ( 2) 00:47:05.847 27.345 - 27.462: 99.6006% ( 2) 00:47:05.847 27.462 - 27.578: 99.6095% ( 1) 00:47:05.847 27.695 - 27.811: 99.6273% ( 2) 00:47:05.847 27.927 - 28.044: 99.6450% ( 2) 00:47:05.847 28.044 - 28.160: 99.6539% ( 1) 00:47:05.847 28.625 - 28.742: 99.6628% ( 1) 00:47:05.847 28.742 - 28.858: 99.6716% ( 1) 00:47:05.847 29.556 - 29.673: 99.6894% ( 2) 00:47:05.847 29.673 - 29.789: 99.6983% ( 1) 00:47:05.847 29.789 - 30.022: 99.7071% ( 1) 00:47:05.847 30.022 - 30.255: 99.7160% ( 1) 00:47:05.847 30.255 - 30.487: 99.7249% ( 1) 00:47:05.847 30.487 - 30.720: 99.7338% ( 1) 00:47:05.847 30.953 - 31.185: 99.7515% ( 2) 00:47:05.847 31.651 - 31.884: 99.7693% ( 2) 00:47:05.847 32.116 - 32.349: 99.7781% ( 1) 00:47:05.847 32.349 - 32.582: 99.7870% ( 1) 00:47:05.847 32.582 - 32.815: 99.8048% ( 2) 00:47:05.847 32.815 - 33.047: 99.8136% ( 1) 00:47:05.847 33.513 - 33.745: 99.8225% ( 1) 00:47:05.847 33.978 - 34.211: 99.8314% ( 1) 00:47:05.847 34.211 - 34.444: 99.8491% ( 2) 00:47:05.847 35.840 - 36.073: 99.8580% ( 1) 00:47:05.847 37.469 - 37.702: 99.8669% ( 1) 00:47:05.847 40.262 - 40.495: 99.8758% ( 1) 00:47:05.847 40.727 - 40.960: 99.8846% ( 1) 00:47:05.847 40.960 - 41.193: 99.9024% ( 2) 00:47:05.847 41.891 - 42.124: 99.9113% ( 1) 00:47:05.847 43.985 - 44.218: 99.9201% 
( 1) 00:47:05.847 52.829 - 53.062: 99.9290% ( 1) 00:47:05.847 61.905 - 62.371: 99.9379% ( 1) 00:47:05.847 63.767 - 64.233: 99.9468% ( 1) 00:47:05.847 70.284 - 70.749: 99.9556% ( 1) 00:47:05.847 75.869 - 76.335: 99.9645% ( 1) 00:47:05.847 86.575 - 87.040: 99.9734% ( 1) 00:47:05.847 110.313 - 110.778: 99.9823% ( 1) 00:47:05.847 128.465 - 129.396: 99.9911% ( 1) 00:47:05.847 134.051 - 134.982: 100.0000% ( 1) 00:47:05.847 00:47:05.847 Complete histogram 00:47:05.847 ================== 00:47:05.847 Range in us Cumulative Count 00:47:05.847 8.436 - 8.495: 0.0089% ( 1) 00:47:05.847 8.495 - 8.553: 0.1686% ( 18) 00:47:05.847 8.553 - 8.611: 1.8016% ( 184) 00:47:05.847 8.611 - 8.669: 7.5612% ( 649) 00:47:05.847 8.669 - 8.727: 18.9563% ( 1284) 00:47:05.847 8.727 - 8.785: 37.1672% ( 2052) 00:47:05.847 8.785 - 8.844: 54.9343% ( 2002) 00:47:05.847 8.844 - 8.902: 67.3411% ( 1398) 00:47:05.847 8.902 - 8.960: 74.6805% ( 827) 00:47:05.847 8.960 - 9.018: 78.6475% ( 447) 00:47:05.847 9.018 - 9.076: 81.0437% ( 270) 00:47:05.847 9.076 - 9.135: 82.2772% ( 139) 00:47:05.847 9.135 - 9.193: 83.1203% ( 95) 00:47:05.847 9.193 - 9.251: 83.6084% ( 55) 00:47:05.847 9.251 - 9.309: 84.1587% ( 62) 00:47:05.847 9.309 - 9.367: 84.6823% ( 59) 00:47:05.847 9.367 - 9.425: 85.0018% ( 36) 00:47:05.847 9.425 - 9.484: 85.3479% ( 39) 00:47:05.847 9.484 - 9.542: 85.5609% ( 24) 00:47:05.847 9.542 - 9.600: 85.7295% ( 19) 00:47:05.847 9.600 - 9.658: 85.9869% ( 29) 00:47:05.847 9.658 - 9.716: 86.3064% ( 36) 00:47:05.847 9.716 - 9.775: 86.5282% ( 25) 00:47:05.847 9.775 - 9.833: 86.7412% ( 24) 00:47:05.847 9.833 - 9.891: 87.0163% ( 31) 00:47:05.847 9.891 - 9.949: 87.1672% ( 17) 00:47:05.847 9.949 - 10.007: 87.2648% ( 11) 00:47:05.847 10.007 - 10.065: 87.4423% ( 20) 00:47:05.847 10.065 - 10.124: 87.5488% ( 12) 00:47:05.847 10.124 - 10.182: 87.5843% ( 4) 00:47:05.847 10.182 - 10.240: 87.6553% ( 8) 00:47:05.847 10.240 - 10.298: 87.6731% ( 2) 00:47:05.847 10.298 - 10.356: 87.7086% ( 4) 00:47:05.847 10.356 - 10.415: 87.7529% ( 5) 00:47:05.847 10.415 - 10.473: 87.8594% ( 12) 00:47:05.847 10.473 - 10.531: 88.0635% ( 23) 00:47:05.847 10.531 - 10.589: 88.3032% ( 27) 00:47:05.847 10.589 - 10.647: 88.5339% ( 26) 00:47:05.847 10.647 - 10.705: 88.7469% ( 24) 00:47:05.847 10.705 - 10.764: 88.9333% ( 21) 00:47:05.847 10.764 - 10.822: 89.1640% ( 26) 00:47:05.847 10.822 - 10.880: 89.4657% ( 34) 00:47:05.847 10.880 - 10.938: 89.7941% ( 37) 00:47:05.847 10.938 - 10.996: 90.1846% ( 44) 00:47:05.847 10.996 - 11.055: 90.5396% ( 40) 00:47:05.847 11.055 - 11.113: 90.9034% ( 41) 00:47:05.848 11.113 - 11.171: 91.1431% ( 27) 00:47:05.848 11.171 - 11.229: 91.3206% ( 20) 00:47:05.848 11.229 - 11.287: 91.4625% ( 16) 00:47:05.848 11.287 - 11.345: 91.5868% ( 14) 00:47:05.848 11.345 - 11.404: 91.6489% ( 7) 00:47:05.848 11.404 - 11.462: 91.7022% ( 6) 00:47:05.848 11.462 - 11.520: 91.7110% ( 1) 00:47:05.848 11.520 - 11.578: 91.7465% ( 4) 00:47:05.848 11.578 - 11.636: 91.7732% ( 3) 00:47:05.848 11.636 - 11.695: 91.8087% ( 4) 00:47:05.848 11.695 - 11.753: 91.8264% ( 2) 00:47:05.848 11.753 - 11.811: 91.8353% ( 1) 00:47:05.848 11.811 - 11.869: 91.8708% ( 4) 00:47:05.848 11.869 - 11.927: 91.8885% ( 2) 00:47:05.848 11.927 - 11.985: 91.9063% ( 2) 00:47:05.848 11.985 - 12.044: 91.9240% ( 2) 00:47:05.848 12.044 - 12.102: 91.9507% ( 3) 00:47:05.848 12.102 - 12.160: 91.9684% ( 2) 00:47:05.848 12.160 - 12.218: 91.9773% ( 1) 00:47:05.848 12.276 - 12.335: 91.9950% ( 2) 00:47:05.848 12.393 - 12.451: 92.0128% ( 2) 00:47:05.848 12.451 - 12.509: 92.0217% ( 1) 00:47:05.848 12.509 - 12.567: 
92.0483% ( 3) 00:47:05.848 12.567 - 12.625: 92.0838% ( 4) 00:47:05.848 12.625 - 12.684: 92.1104% ( 3) 00:47:05.848 12.684 - 12.742: 92.1370% ( 3) 00:47:05.848 12.742 - 12.800: 92.1548% ( 2) 00:47:05.848 12.800 - 12.858: 92.1636% ( 1) 00:47:05.848 12.858 - 12.916: 92.2169% ( 6) 00:47:05.848 12.916 - 12.975: 92.2701% ( 6) 00:47:05.848 12.975 - 13.033: 92.2968% ( 3) 00:47:05.848 13.033 - 13.091: 92.3411% ( 5) 00:47:05.848 13.091 - 13.149: 92.3855% ( 5) 00:47:05.848 13.149 - 13.207: 92.4299% ( 5) 00:47:05.848 13.207 - 13.265: 92.4388% ( 1) 00:47:05.848 13.265 - 13.324: 92.4565% ( 2) 00:47:05.848 13.324 - 13.382: 92.5009% ( 5) 00:47:05.848 13.382 - 13.440: 92.5453% ( 5) 00:47:05.848 13.440 - 13.498: 92.5985% ( 6) 00:47:05.848 13.498 - 13.556: 92.6606% ( 7) 00:47:05.848 13.556 - 13.615: 92.6961% ( 4) 00:47:05.848 13.615 - 13.673: 92.7583% ( 7) 00:47:05.848 13.673 - 13.731: 92.8026% ( 5) 00:47:05.848 13.731 - 13.789: 92.8559% ( 6) 00:47:05.848 13.789 - 13.847: 92.9091% ( 6) 00:47:05.848 13.847 - 13.905: 92.9890% ( 9) 00:47:05.848 13.905 - 13.964: 93.0511% ( 7) 00:47:05.848 13.964 - 14.022: 93.1665% ( 13) 00:47:05.848 14.022 - 14.080: 93.2730% ( 12) 00:47:05.848 14.080 - 14.138: 93.3972% ( 14) 00:47:05.848 14.138 - 14.196: 93.5126% ( 13) 00:47:05.848 14.196 - 14.255: 93.5747% ( 7) 00:47:05.848 14.255 - 14.313: 93.6546% ( 9) 00:47:05.848 14.313 - 14.371: 93.7966% ( 16) 00:47:05.848 14.371 - 14.429: 93.9120% ( 13) 00:47:05.848 14.429 - 14.487: 94.0007% ( 10) 00:47:05.848 14.487 - 14.545: 94.0806% ( 9) 00:47:05.848 14.545 - 14.604: 94.1871% ( 12) 00:47:05.848 14.604 - 14.662: 94.2847% ( 11) 00:47:05.848 14.662 - 14.720: 94.3557% ( 8) 00:47:05.848 14.720 - 14.778: 94.4267% ( 8) 00:47:05.848 14.778 - 14.836: 94.5598% ( 15) 00:47:05.848 14.836 - 14.895: 94.6663% ( 12) 00:47:05.848 14.895 - 15.011: 94.8882% ( 25) 00:47:05.848 15.011 - 15.127: 95.2609% ( 42) 00:47:05.848 15.127 - 15.244: 95.5538% ( 33) 00:47:05.848 15.244 - 15.360: 95.7845% ( 26) 00:47:05.848 15.360 - 15.476: 96.0951% ( 35) 00:47:05.848 15.476 - 15.593: 96.3703% ( 31) 00:47:05.848 15.593 - 15.709: 96.5832% ( 24) 00:47:05.848 15.709 - 15.825: 96.7874% ( 23) 00:47:05.848 15.825 - 15.942: 97.0536% ( 30) 00:47:05.848 15.942 - 16.058: 97.2400% ( 21) 00:47:05.848 16.058 - 16.175: 97.3642% ( 14) 00:47:05.848 16.175 - 16.291: 97.4707% ( 12) 00:47:05.848 16.291 - 16.407: 97.6482% ( 20) 00:47:05.848 16.407 - 16.524: 97.7547% ( 12) 00:47:05.848 16.524 - 16.640: 97.8789% ( 14) 00:47:05.848 16.640 - 16.756: 98.0209% ( 16) 00:47:05.848 16.756 - 16.873: 98.1274% ( 12) 00:47:05.848 16.873 - 16.989: 98.2073% ( 9) 00:47:05.848 16.989 - 17.105: 98.2961% ( 10) 00:47:05.848 17.105 - 17.222: 98.3937% ( 11) 00:47:05.848 17.222 - 17.338: 98.4558% ( 7) 00:47:05.848 17.338 - 17.455: 98.5179% ( 7) 00:47:05.848 17.455 - 17.571: 98.5623% ( 5) 00:47:05.848 17.571 - 17.687: 98.6422% ( 9) 00:47:05.848 17.687 - 17.804: 98.7487% ( 12) 00:47:05.848 17.804 - 17.920: 98.7930% ( 5) 00:47:05.848 17.920 - 18.036: 98.8463% ( 6) 00:47:05.848 18.036 - 18.153: 98.9173% ( 8) 00:47:05.848 18.153 - 18.269: 98.9350% ( 2) 00:47:05.848 18.269 - 18.385: 99.0060% ( 8) 00:47:05.848 18.385 - 18.502: 99.0593% ( 6) 00:47:05.848 18.502 - 18.618: 99.0859% ( 3) 00:47:05.848 18.618 - 18.735: 99.1125% ( 3) 00:47:05.848 18.735 - 18.851: 99.1480% ( 4) 00:47:05.848 18.851 - 18.967: 99.2102% ( 7) 00:47:05.848 18.967 - 19.084: 99.2457% ( 4) 00:47:05.848 19.084 - 19.200: 99.2545% ( 1) 00:47:05.848 19.200 - 19.316: 99.2900% ( 4) 00:47:05.848 19.316 - 19.433: 99.3255% ( 4) 00:47:05.848 19.433 - 19.549: 
99.3699% ( 5) 00:47:05.848 19.549 - 19.665: 99.4143% ( 5) 00:47:05.848 19.665 - 19.782: 99.4409% ( 3) 00:47:05.848 19.782 - 19.898: 99.4764% ( 4) 00:47:05.848 20.015 - 20.131: 99.4941% ( 2) 00:47:05.848 20.131 - 20.247: 99.5030% ( 1) 00:47:05.848 20.247 - 20.364: 99.5208% ( 2) 00:47:05.848 20.364 - 20.480: 99.5474% ( 3) 00:47:05.848 20.596 - 20.713: 99.5563% ( 1) 00:47:05.848 20.945 - 21.062: 99.5651% ( 1) 00:47:05.848 21.062 - 21.178: 99.5829% ( 2) 00:47:05.848 21.178 - 21.295: 99.5918% ( 1) 00:47:05.848 21.527 - 21.644: 99.6006% ( 1) 00:47:05.848 21.993 - 22.109: 99.6095% ( 1) 00:47:05.848 22.109 - 22.225: 99.6184% ( 1) 00:47:05.848 22.225 - 22.342: 99.6273% ( 1) 00:47:05.848 22.342 - 22.458: 99.6450% ( 2) 00:47:05.848 22.807 - 22.924: 99.6539% ( 1) 00:47:05.848 23.505 - 23.622: 99.6628% ( 1) 00:47:05.848 23.738 - 23.855: 99.6716% ( 1) 00:47:05.848 24.087 - 24.204: 99.6805% ( 1) 00:47:05.848 24.553 - 24.669: 99.6894% ( 1) 00:47:05.848 25.251 - 25.367: 99.6983% ( 1) 00:47:05.848 25.484 - 25.600: 99.7071% ( 1) 00:47:05.848 26.182 - 26.298: 99.7160% ( 1) 00:47:05.848 26.647 - 26.764: 99.7249% ( 1) 00:47:05.848 26.880 - 26.996: 99.7338% ( 1) 00:47:05.848 27.229 - 27.345: 99.7426% ( 1) 00:47:05.848 27.578 - 27.695: 99.7515% ( 1) 00:47:05.848 27.695 - 27.811: 99.7604% ( 1) 00:47:05.848 27.811 - 27.927: 99.7693% ( 1) 00:47:05.848 28.742 - 28.858: 99.7781% ( 1) 00:47:05.848 28.975 - 29.091: 99.7870% ( 1) 00:47:05.848 29.207 - 29.324: 99.7959% ( 1) 00:47:05.848 29.673 - 29.789: 99.8048% ( 1) 00:47:05.848 30.720 - 30.953: 99.8136% ( 1) 00:47:05.848 31.651 - 31.884: 99.8225% ( 1) 00:47:05.848 32.815 - 33.047: 99.8314% ( 1) 00:47:05.848 33.280 - 33.513: 99.8491% ( 2) 00:47:05.848 33.513 - 33.745: 99.8580% ( 1) 00:47:05.848 33.745 - 33.978: 99.8669% ( 1) 00:47:05.848 39.331 - 39.564: 99.8758% ( 1) 00:47:05.848 40.029 - 40.262: 99.8846% ( 1) 00:47:05.848 41.658 - 41.891: 99.8935% ( 1) 00:47:05.848 42.822 - 43.055: 99.9113% ( 2) 00:47:05.848 43.055 - 43.287: 99.9201% ( 1) 00:47:05.848 45.847 - 46.080: 99.9290% ( 1) 00:47:05.848 47.476 - 47.709: 99.9379% ( 1) 00:47:05.848 50.036 - 50.269: 99.9556% ( 2) 00:47:05.848 50.735 - 50.967: 99.9645% ( 1) 00:47:05.848 55.855 - 56.087: 99.9734% ( 1) 00:47:05.848 58.415 - 58.647: 99.9823% ( 1) 00:47:05.848 59.578 - 60.044: 99.9911% ( 1) 00:47:05.848 85.644 - 86.109: 100.0000% ( 1) 00:47:05.848 00:47:05.848 00:47:05.848 real 0m1.312s 00:47:05.848 user 0m1.135s 00:47:05.848 sys 0m0.100s 00:47:05.848 09:13:40 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:05.848 ************************************ 00:47:05.848 END TEST nvme_overhead 00:47:05.848 ************************************ 00:47:05.848 09:13:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:47:05.848 09:13:40 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:05.848 09:13:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:47:05.848 09:13:40 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:47:05.848 09:13:40 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:05.848 09:13:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:05.848 ************************************ 00:47:05.848 START TEST nvme_arbitration 00:47:05.848 ************************************ 00:47:05.848 09:13:40 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:47:10.032 Initializing NVMe Controllers 
00:47:10.032 Attached to 0000:00:10.0 00:47:10.032 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:47:10.032 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:47:10.032 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:47:10.032 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:47:10.032 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:47:10.032 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:47:10.032 Initialization complete. Launching workers. 00:47:10.032 Starting thread on core 1 with urgent priority queue 00:47:10.032 Starting thread on core 2 with urgent priority queue 00:47:10.032 Starting thread on core 3 with urgent priority queue 00:47:10.032 Starting thread on core 0 with urgent priority queue 00:47:10.032 QEMU NVMe Ctrl (12340 ) core 0: 1194.67 IO/s 83.71 secs/100000 ios 00:47:10.032 QEMU NVMe Ctrl (12340 ) core 1: 1408.00 IO/s 71.02 secs/100000 ios 00:47:10.032 QEMU NVMe Ctrl (12340 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:47:10.032 QEMU NVMe Ctrl (12340 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:47:10.032 ======================================================== 00:47:10.032 00:47:10.032 00:47:10.032 real 0m3.428s 00:47:10.032 user 0m9.407s 00:47:10.032 sys 0m0.136s 00:47:10.032 09:13:44 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:10.032 ************************************ 00:47:10.032 END TEST nvme_arbitration 00:47:10.032 ************************************ 00:47:10.032 09:13:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:10.032 09:13:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:10.032 ************************************ 00:47:10.032 START TEST nvme_single_aen 00:47:10.032 ************************************ 00:47:10.032 09:13:44 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:47:10.032 Asynchronous Event Request test 00:47:10.032 Attached to 0000:00:10.0 00:47:10.032 Reset controller to setup AER completions for this process 00:47:10.032 Registering asynchronous event callbacks... 00:47:10.032 Getting orig temperature thresholds of all controllers 00:47:10.032 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:47:10.032 Setting all controllers temperature threshold low to trigger AER 00:47:10.032 Waiting for all controllers temperature threshold to be set lower 00:47:10.032 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:47:10.032 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:47:10.032 Waiting for all controllers to trigger AER and reset threshold 00:47:10.032 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:47:10.032 Cleaning up... 
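The aer run just above (test/nvme/aer/aer -T -i 0) provokes an Asynchronous Event on purpose: it reads the original temperature threshold (343 Kelvin, 70 Celsius), programs a value below the current temperature (323 Kelvin, 50 Celsius), waits for the AER that points at log page 2 (SMART / health information), and then restores the threshold in aer_cb. For reference only, the same knob is the Temperature Threshold feature (FID 0x04) on the kernel stack; the nvme-cli device name and value below are purely illustrative and not part of this run:

    nvme get-feature /dev/nvme0 -f 0x04              # read the current Temperature Threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140    # 320 K, below the 323 K reading, provokes an AER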
00:47:10.032 00:47:10.032 real 0m0.316s 00:47:10.032 user 0m0.102s 00:47:10.032 sys 0m0.157s 00:47:10.032 09:13:44 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:10.032 ************************************ 00:47:10.032 END TEST nvme_single_aen 00:47:10.032 ************************************ 00:47:10.032 09:13:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:10.032 09:13:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:10.032 09:13:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:10.032 ************************************ 00:47:10.032 START TEST nvme_doorbell_aers 00:47:10.032 ************************************ 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:47:10.032 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:47:10.033 09:13:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:10.033 [2024-07-12 09:13:45.169601] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175296) is not found. Dropping the request. 00:47:19.998 Executing: test_write_invalid_db 00:47:19.998 Waiting for AER completion... 00:47:19.998 Failure: test_write_invalid_db 00:47:19.998 00:47:19.998 Executing: test_invalid_db_write_overflow_sq 00:47:19.998 Waiting for AER completion... 00:47:19.998 Failure: test_invalid_db_write_overflow_sq 00:47:19.998 00:47:19.998 Executing: test_invalid_db_write_overflow_cq 00:47:19.998 Waiting for AER completion... 
00:47:19.998 Failure: test_invalid_db_write_overflow_cq 00:47:19.998 00:47:19.998 00:47:19.998 real 0m10.121s 00:47:19.998 user 0m8.452s 00:47:19.998 sys 0m1.600s 00:47:19.998 ************************************ 00:47:19.998 END TEST nvme_doorbell_aers 00:47:19.998 ************************************ 00:47:19.998 09:13:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:19.998 09:13:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:47:19.998 09:13:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:19.998 09:13:54 nvme -- nvme/nvme.sh@97 -- # uname 00:47:19.998 09:13:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:47:19.998 09:13:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:47:19.998 09:13:54 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:47:19.998 09:13:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:19.998 09:13:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:19.998 ************************************ 00:47:19.998 START TEST nvme_multi_aen 00:47:19.998 ************************************ 00:47:19.998 09:13:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:47:20.312 [2024-07-12 09:13:55.267997] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175296) is not found. Dropping the request. 00:47:20.313 [2024-07-12 09:13:55.268185] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175296) is not found. Dropping the request. 00:47:20.313 [2024-07-12 09:13:55.268226] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175296) is not found. Dropping the request. 00:47:20.313 Child process pid: 175505 00:47:20.570 [Child] Asynchronous Event Request test 00:47:20.570 [Child] Attached to 0000:00:10.0 00:47:20.570 [Child] Registering asynchronous event callbacks... 00:47:20.570 [Child] Getting orig temperature thresholds of all controllers 00:47:20.570 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:47:20.570 [Child] Waiting for all controllers to trigger AER and reset threshold 00:47:20.570 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:47:20.570 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:47:20.570 [Child] Cleaning up... 00:47:20.827 Asynchronous Event Request test 00:47:20.827 Attached to 0000:00:10.0 00:47:20.827 Reset controller to setup AER completions for this process 00:47:20.827 Registering asynchronous event callbacks... 00:47:20.827 Getting orig temperature thresholds of all controllers 00:47:20.827 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:47:20.827 Setting all controllers temperature threshold low to trigger AER 00:47:20.827 Waiting for all controllers temperature threshold to be set lower 00:47:20.827 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:47:20.827 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:47:20.827 Waiting for all controllers to trigger AER and reset threshold 00:47:20.827 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:47:20.827 Cleaning up... 
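The device list used by the nvme_doorbell_aers run above is produced by the get_nvme_bdfs helper whose expansion is visible in that trace; stripped of the wrappers it amounts to the following, with the script path and jq filter copied verbatim from the xtrace:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # a single 0000:00:10.0 on this QEMU guest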
00:47:20.827 00:47:20.827 real 0m0.814s 00:47:20.827 user 0m0.384s 00:47:20.827 sys 0m0.245s 00:47:20.827 09:13:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:20.827 ************************************ 00:47:20.827 END TEST nvme_multi_aen 00:47:20.827 ************************************ 00:47:20.827 09:13:55 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:47:20.827 09:13:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:20.827 09:13:55 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:47:20.827 09:13:55 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:47:20.827 09:13:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:20.827 09:13:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:20.827 ************************************ 00:47:20.827 START TEST nvme_startup 00:47:20.827 ************************************ 00:47:20.827 09:13:55 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:47:21.085 Initializing NVMe Controllers 00:47:21.085 Attached to 0000:00:10.0 00:47:21.085 Initialization complete. 00:47:21.085 Time used:200647.594 (us). 00:47:21.085 00:47:21.085 real 0m0.307s 00:47:21.085 user 0m0.086s 00:47:21.085 sys 0m0.159s 00:47:21.085 09:13:56 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:21.085 ************************************ 00:47:21.085 END TEST nvme_startup 00:47:21.085 ************************************ 00:47:21.085 09:13:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:47:21.085 09:13:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:21.085 09:13:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:47:21.085 09:13:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:21.085 09:13:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:21.085 09:13:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:21.085 ************************************ 00:47:21.085 START TEST nvme_multi_secondary 00:47:21.085 ************************************ 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=175581 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=175582 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:47:21.085 09:13:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:47:24.363 Initializing NVMe Controllers 00:47:24.363 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:24.363 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:47:24.363 Initialization complete. Launching workers. 
00:47:24.363 ======================================================== 00:47:24.363 Latency(us) 00:47:24.363 Device Information : IOPS MiB/s Average min max 00:47:24.363 PCIE (0000:00:10.0) NSID 1 from core 2: 14176.00 55.38 1127.96 171.99 17455.97 00:47:24.363 ======================================================== 00:47:24.363 Total : 14176.00 55.38 1127.96 171.99 17455.97 00:47:24.363 00:47:24.620 09:13:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 175581 00:47:24.885 Initializing NVMe Controllers 00:47:24.885 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:24.885 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:47:24.885 Initialization complete. Launching workers. 00:47:24.885 ======================================================== 00:47:24.885 Latency(us) 00:47:24.885 Device Information : IOPS MiB/s Average min max 00:47:24.885 PCIE (0000:00:10.0) NSID 1 from core 1: 33000.82 128.91 484.47 162.64 2767.47 00:47:24.885 ======================================================== 00:47:24.885 Total : 33000.82 128.91 484.47 162.64 2767.47 00:47:24.885 00:47:26.793 Initializing NVMe Controllers 00:47:26.793 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:26.793 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:26.793 Initialization complete. Launching workers. 00:47:26.793 ======================================================== 00:47:26.793 Latency(us) 00:47:26.793 Device Information : IOPS MiB/s Average min max 00:47:26.793 PCIE (0000:00:10.0) NSID 1 from core 0: 41704.00 162.91 383.30 113.28 2085.72 00:47:26.793 ======================================================== 00:47:26.793 Total : 41704.00 162.91 383.30 113.28 2085.72 00:47:26.793 00:47:26.793 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 175582 00:47:26.793 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=175654 00:47:26.793 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:47:26.793 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=175655 00:47:26.793 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:47:26.794 09:14:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:47:30.073 Initializing NVMe Controllers 00:47:30.073 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:30.073 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:30.073 Initialization complete. Launching workers. 00:47:30.073 ======================================================== 00:47:30.073 Latency(us) 00:47:30.073 Device Information : IOPS MiB/s Average min max 00:47:30.073 PCIE (0000:00:10.0) NSID 1 from core 0: 33956.17 132.64 470.84 101.18 2075.07 00:47:30.073 ======================================================== 00:47:30.073 Total : 33956.17 132.64 470.84 101.18 2075.07 00:47:30.073 00:47:30.358 Initializing NVMe Controllers 00:47:30.358 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:30.358 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:47:30.358 Initialization complete. Launching workers. 
00:47:30.358 ======================================================== 00:47:30.358 Latency(us) 00:47:30.358 Device Information : IOPS MiB/s Average min max 00:47:30.358 PCIE (0000:00:10.0) NSID 1 from core 1: 34236.67 133.74 466.95 126.25 1925.50 00:47:30.358 ======================================================== 00:47:30.358 Total : 34236.67 133.74 466.95 126.25 1925.50 00:47:30.358 00:47:32.255 Initializing NVMe Controllers 00:47:32.255 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:32.255 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:47:32.255 Initialization complete. Launching workers. 00:47:32.255 ======================================================== 00:47:32.255 Latency(us) 00:47:32.255 Device Information : IOPS MiB/s Average min max 00:47:32.255 PCIE (0000:00:10.0) NSID 1 from core 2: 17974.66 70.21 889.37 139.59 20667.60 00:47:32.255 ======================================================== 00:47:32.255 Total : 17974.66 70.21 889.37 139.59 20667.60 00:47:32.255 00:47:32.255 09:14:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 175654 00:47:32.255 09:14:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 175655 00:47:32.255 ************************************ 00:47:32.255 END TEST nvme_multi_secondary 00:47:32.255 ************************************ 00:47:32.255 00:47:32.255 real 0m10.885s 00:47:32.255 user 0m18.687s 00:47:32.255 sys 0m0.802s 00:47:32.255 09:14:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:32.255 09:14:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:32.255 09:14:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:47:32.255 09:14:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/174855 ]] 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1088 -- # kill 174855 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1089 -- # wait 174855 00:47:32.255 [2024-07-12 09:14:07.119552] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175504) is not found. Dropping the request. 00:47:32.255 [2024-07-12 09:14:07.119755] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175504) is not found. Dropping the request. 00:47:32.255 [2024-07-12 09:14:07.119810] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175504) is not found. Dropping the request. 00:47:32.255 [2024-07-12 09:14:07.119892] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 175504) is not found. Dropping the request. 
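Just before the stub teardown above, nvme_multi_secondary ran three spdk_nvme_perf instances against the same controller through DPDK multi-process: each instance passes the same shared-memory group with -i 0 and gets its own core mask, which is why the latency tables are reported per core (0x1, 0x2, 0x4). A condensed sketch of that pairing, flags copied from the log; the backgrounding and sequencing are simplified here, and in practice the first instance to start owns the shared state:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # same -i 0 shm group, different core
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait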
00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:47:32.255 09:14:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:32.255 09:14:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:32.255 ************************************ 00:47:32.255 START TEST bdev_nvme_reset_stuck_adm_cmd 00:47:32.255 ************************************ 00:47:32.255 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:47:32.513 * Looking for test storage... 00:47:32.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=175815 00:47:32.513 09:14:07 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 175815 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 175815 ']' 00:47:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:32.513 09:14:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:32.513 [2024-07-12 09:14:07.634152] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:47:32.513 [2024-07-12 09:14:07.634419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175815 ] 00:47:32.771 [2024-07-12 09:14:07.844859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:33.028 [2024-07-12 09:14:08.094772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:33.028 [2024-07-12 09:14:08.094871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:33.028 [2024-07-12 09:14:08.095015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:47:33.028 [2024-07-12 09:14:08.095031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:33.962 nvme0n1 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jMFMc.txt 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:33.962 true 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720775648 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=175843 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:33.962 09:14:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:35.862 [2024-07-12 09:14:10.984648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:47:35.862 [2024-07-12 09:14:10.985096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:47:35.862 [2024-07-12 09:14:10.985164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:47:35.862 [2024-07-12 09:14:10.985224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:35.862 [2024-07-12 09:14:10.987114] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
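The trace above is the heart of the stuck-admin-command test: error injection is armed for admin opcode 10 (Get Features) with --do_not_submit and a 15 s timeout, a Get Features command is then sent through the RPC layer so it hangs on the injected error, and a controller reset is issued while that command is still outstanding; the "Command completed manually" and "Resetting controller successful" notices show the reset flushing the stuck request. Condensed to the three scripts/rpc.py calls involved, with arguments copied from the xtrace and GET_FEATURES_B64 standing in for the base64 payload shown above:

    # GET_FEATURES_B64 holds the base64-encoded admin command from the trace (Get Features, cdw10=0x7)
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" &   # hangs on the injected error
    rpc.py bdev_nvme_reset_controller nvme0    # the reset completes the stuck command manually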
00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 175843 00:47:35.862 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 175843 00:47:35.862 09:14:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 175843 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:47:35.862 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jMFMc.txt 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jMFMc.txt 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 175815 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 175815 ']' 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 175815 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175815 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175815' 00:47:36.121 killing process with pid 175815 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 175815 00:47:36.121 09:14:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 175815 00:47:38.686 09:14:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:47:38.686 09:14:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:47:38.686 00:47:38.686 real 0m6.164s 00:47:38.686 user 0m21.350s 00:47:38.686 sys 0m0.583s 00:47:38.686 09:14:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:38.686 ************************************ 00:47:38.686 09:14:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:47:38.686 END TEST bdev_nvme_reset_stuck_adm_cmd 00:47:38.686 ************************************ 00:47:38.686 09:14:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:38.686 09:14:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:47:38.686 09:14:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:47:38.686 09:14:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:38.686 09:14:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:38.686 09:14:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:38.686 ************************************ 00:47:38.686 START TEST nvme_fio 00:47:38.686 ************************************ 00:47:38.686 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:47:38.686 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:47:38.686 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:47:38.687 09:14:13 nvme.nvme_fio -- 
nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:47:38.687 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:47:38.687 09:14:13 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:47:38.687 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:47:38.687 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:47:38.687 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:38.687 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:47:38.944 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:38.944 09:14:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:47:39.202 09:14:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:47:39.202 09:14:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:47:39.202 09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:47:39.202 
09:14:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:47:39.461 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:47:39.461 fio-3.35 00:47:39.461 Starting 1 thread 00:47:42.744 00:47:42.744 test: (groupid=0, jobs=1): err= 0: pid=175992: Fri Jul 12 09:14:17 2024 00:47:42.744 read: IOPS=16.1k, BW=62.9MiB/s (65.9MB/s)(126MiB/2001msec) 00:47:42.744 slat (usec): min=4, max=122, avg= 6.54, stdev= 1.90 00:47:42.744 clat (usec): min=333, max=12445, avg=3951.21, stdev=748.48 00:47:42.744 lat (usec): min=339, max=12567, avg=3957.75, stdev=749.39 00:47:42.744 clat percentiles (usec): 00:47:42.744 | 1.00th=[ 2999], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3556], 00:47:42.744 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:47:42.744 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4555], 95.00th=[ 5735], 00:47:42.744 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[10552], 00:47:42.744 | 99.99th=[12125] 00:47:42.744 bw ( KiB/s): min=58064, max=69640, per=98.83%, avg=63626.67, stdev=5801.14, samples=3 00:47:42.744 iops : min=14516, max=17410, avg=15906.67, stdev=1450.29, samples=3 00:47:42.744 write: IOPS=16.1k, BW=63.0MiB/s (66.1MB/s)(126MiB/2001msec); 0 zone resets 00:47:42.744 slat (nsec): min=4683, max=56164, avg=6775.38, stdev=1893.77 00:47:42.744 clat (usec): min=246, max=12275, avg=3960.33, stdev=755.77 00:47:42.744 lat (usec): min=252, max=12331, avg=3967.11, stdev=756.66 00:47:42.744 clat percentiles (usec): 00:47:42.744 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3589], 00:47:42.744 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:47:42.744 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4555], 95.00th=[ 5735], 00:47:42.744 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8979], 99.95th=[10814], 00:47:42.744 | 99.99th=[11994] 00:47:42.744 bw ( KiB/s): min=58328, max=68944, per=98.16%, avg=63325.33, stdev=5335.20, samples=3 00:47:42.744 iops : min=14582, max=17236, avg=15831.33, stdev=1333.80, samples=3 00:47:42.744 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:47:42.744 lat (msec) : 2=0.16%, 4=72.54%, 10=27.19%, 20=0.07% 00:47:42.744 cpu : usr=99.70%, sys=0.20%, ctx=6, majf=0, minf=37 00:47:42.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:47:42.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:42.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:42.744 issued rwts: total=32205,32272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:42.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:42.744 00:47:42.744 Run status group 0 (all jobs): 00:47:42.744 READ: bw=62.9MiB/s (65.9MB/s), 62.9MiB/s-62.9MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:47:42.744 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=126MiB (132MB), run=2001-2001msec 00:47:42.744 ----------------------------------------------------- 00:47:42.744 Suppressions used: 00:47:42.744 count bytes template 00:47:42.744 1 32 /usr/src/fio/parse.c 00:47:42.744 ----------------------------------------------------- 00:47:42.744 00:47:42.744 09:14:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:47:42.744 09:14:17 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:47:42.744 
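The fio run above goes through SPDK's fio NVMe plugin: the plugin library is LD_PRELOADed into a stock fio binary and the target controller is addressed with a trtype/traddr filename instead of a block device node. Below is a hedged sketch of an equivalent invocation; only ioengine=spdk, the plugin path, the --filename syntax and --bs=4096 are taken from the log, while the job options are illustrative and not the contents of SPDK's example_config.fio.

```bash
#!/usr/bin/env bash
# Hedged sketch of driving fio through the SPDK NVMe plugin, as in the run above.
cat > /tmp/spdk_nvme.fio << 'EOF'
[global]
ioengine=spdk        ; engine registered by the LD_PRELOADed plugin
thread=1             ; the SPDK plugin requires fio's thread mode
rw=randrw
iodepth=128
time_based=1
runtime=2            ; the logged run lasted ~2001 msec

[test]
numjobs=1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
	/usr/src/fio/fio /tmp/spdk_nvme.fio \
	'--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
```

Note that the PCI address in the filename is written with dots (0000.00.10.0) rather than colons, since ':' is fio's separator between multiple filenames.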
************************************ 00:47:42.744 END TEST nvme_fio 00:47:42.744 ************************************ 00:47:42.744 00:47:42.744 real 0m4.279s 00:47:42.744 user 0m3.527s 00:47:42.744 sys 0m0.420s 00:47:42.744 09:14:17 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:42.744 09:14:17 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:47:42.744 09:14:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:47:42.744 00:47:42.744 real 0m47.523s 00:47:42.744 user 2m7.564s 00:47:42.744 sys 0m8.167s 00:47:42.744 09:14:17 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:42.744 09:14:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:47:42.744 ************************************ 00:47:42.744 END TEST nvme 00:47:42.744 ************************************ 00:47:43.019 09:14:17 -- common/autotest_common.sh@1142 -- # return 0 00:47:43.019 09:14:17 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:47:43.019 09:14:17 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:47:43.019 09:14:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:43.019 09:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:43.019 09:14:17 -- common/autotest_common.sh@10 -- # set +x 00:47:43.019 ************************************ 00:47:43.019 START TEST nvme_scc 00:47:43.019 ************************************ 00:47:43.019 09:14:17 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:47:43.019 * Looking for test storage... 00:47:43.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:43.019 09:14:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:43.019 09:14:18 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:43.019 09:14:18 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:43.019 09:14:18 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:43.019 09:14:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:43.019 09:14:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:43.019 09:14:18 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:43.019 09:14:18 nvme_scc -- paths/export.sh@5 -- # export PATH 00:47:43.019 09:14:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:47:43.019 09:14:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:47:43.019 09:14:18 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:43.019 09:14:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:47:43.019 09:14:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:47:43.019 09:14:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:47:43.019 09:14:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:43.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:43.308 Waiting for block devices as requested 00:47:43.308 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:43.308 09:14:18 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:47:43.308 09:14:18 nvme_scc -- scripts/common.sh@15 -- # local i 00:47:43.308 09:14:18 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:47:43.308 09:14:18 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:47:43.308 09:14:18 nvme_scc -- scripts/common.sh@24 -- # return 0 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 
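The long register-by-register trace that follows is scan_nvme_ctrls populating a bash associative array (nvme0, then nvme0n1) from nvme-cli's id-ctrl and id-ns output. A simplified, hedged sketch of that mechanism is below; the real nvme_get in SPDK's test/common/nvme/functions.sh handles more cases (namespace scanning, its own trimming rules), so this only illustrates the key:value-to-array idea visible in the trace.

```bash
#!/usr/bin/env bash
# Hedged sketch: turn "reg : value" lines from `nvme id-ctrl` into nvme0[reg]=value.
nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"

	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/}                # "vid      " -> "vid", "ps    0" -> "ps0"
		val=${val#"${val%%[![:space:]]*}"}      # drop leading spaces, keep trailing ones ("12340 ")
		[[ -n $val ]] || continue               # skip header lines such as "NVME Identify Controller:"
		eval "${ref}[$reg]=\"\$val\""           # e.g. nvme0[vid]="0x1b36"
	done < <("$@")
}

# Usage mirroring the trace (command path as shown in the log):
# nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
# echo "${nvme0[sn]}"   # -> "12340 "
```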
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:47:43.308 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:47:43.309 09:14:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:47:43.309 09:14:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:47:43.309 09:14:18 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:47:43.309 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:47:43.310 
09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:47:43.310 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:47:43.311 09:14:18 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.311 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:47:43.580 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:47:43.581 
09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:47:43.581 09:14:18 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:47:43.581 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:47:43.582 
09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:47:43.582 09:14:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:47:43.582 09:14:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:47:43.582 09:14:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:47:43.582 09:14:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:43.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:43.841 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:45.215 09:14:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:47:45.215 09:14:20 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:47:45.215 09:14:20 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:45.215 09:14:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:47:45.216 ************************************ 00:47:45.216 START TEST nvme_simple_copy 00:47:45.216 ************************************ 00:47:45.216 09:14:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:47:45.216 Initializing NVMe Controllers 00:47:45.216 Attaching to 0000:00:10.0 00:47:45.216 Controller supports SCC. Attached to 0000:00:10.0 00:47:45.216 Namespace ID: 1 size: 5GB 00:47:45.216 Initialization complete. 
00:47:45.216 00:47:45.216 Controller QEMU NVMe Ctrl (12340 ) 00:47:45.216 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:47:45.216 Namespace Block Size:4096 00:47:45.216 Writing LBAs 0 to 63 with Random Data 00:47:45.216 Copied LBAs from 0 - 63 to the Destination LBA 256 00:47:45.216 LBAs matching Written Data: 64 00:47:45.216 ************************************ 00:47:45.216 END TEST nvme_simple_copy 00:47:45.216 ************************************ 00:47:45.216 00:47:45.216 real 0m0.325s 00:47:45.216 user 0m0.119s 00:47:45.216 sys 0m0.107s 00:47:45.216 09:14:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:45.216 09:14:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:47:45.216 09:14:20 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:47:45.216 00:47:45.216 real 0m2.419s 00:47:45.216 user 0m0.630s 00:47:45.216 sys 0m1.656s 00:47:45.216 ************************************ 00:47:45.216 END TEST nvme_scc 00:47:45.216 ************************************ 00:47:45.216 09:14:20 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:45.216 09:14:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:47:45.474 09:14:20 -- common/autotest_common.sh@1142 -- # return 0 00:47:45.474 09:14:20 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:47:45.474 09:14:20 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:47:45.474 09:14:20 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:47:45.474 09:14:20 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:47:45.474 09:14:20 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:47:45.474 09:14:20 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:47:45.474 09:14:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:45.474 09:14:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:45.474 09:14:20 -- common/autotest_common.sh@10 -- # set +x 00:47:45.474 ************************************ 00:47:45.474 START TEST nvme_rpc 00:47:45.474 ************************************ 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:47:45.474 * Looking for test storage... 
00:47:45.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:47:45.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=176505 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:47:45.474 09:14:20 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 176505 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 176505 ']' 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:45.474 09:14:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:45.732 [2024-07-12 09:14:20.668613] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:47:45.732 [2024-07-12 09:14:20.668994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176505 ] 00:47:45.732 [2024-07-12 09:14:20.842500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:45.992 [2024-07-12 09:14:21.052420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:45.992 [2024-07-12 09:14:21.052415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:46.927 09:14:21 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:46.927 09:14:21 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:47:46.927 09:14:21 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:47:46.927 Nvme0n1 00:47:47.185 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:47:47.185 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:47:47.185 request: 00:47:47.185 { 00:47:47.185 "bdev_name": "Nvme0n1", 00:47:47.185 "filename": "non_existing_file", 00:47:47.185 "method": "bdev_nvme_apply_firmware", 00:47:47.185 "req_id": 1 00:47:47.185 } 00:47:47.185 Got JSON-RPC error response 00:47:47.185 response: 00:47:47.185 { 00:47:47.185 "code": -32603, 00:47:47.185 "message": "open file failed." 00:47:47.185 } 00:47:47.185 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:47:47.185 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:47:47.185 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:47:47.443 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:47:47.443 09:14:22 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 176505 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 176505 ']' 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 176505 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176505 00:47:47.443 killing process with pid 176505 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176505' 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@967 -- # kill 176505 00:47:47.443 09:14:22 nvme_rpc -- common/autotest_common.sh@972 -- # wait 176505 00:47:50.026 ************************************ 00:47:50.026 END TEST nvme_rpc 00:47:50.026 ************************************ 00:47:50.026 00:47:50.026 real 0m4.198s 00:47:50.026 user 0m8.000s 00:47:50.026 sys 0m0.576s 00:47:50.026 09:14:24 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:50.026 09:14:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:47:50.026 09:14:24 -- common/autotest_common.sh@1142 -- # return 0 00:47:50.026 09:14:24 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:47:50.026 09:14:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:50.026 09:14:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:50.026 09:14:24 -- common/autotest_common.sh@10 -- # set +x 00:47:50.026 ************************************ 00:47:50.026 START TEST nvme_rpc_timeouts 00:47:50.026 ************************************ 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:47:50.026 * Looking for test storage... 00:47:50.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_176578 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_176578 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=176604 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:47:50.026 09:14:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 176604 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 176604 ']' 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:50.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:50.026 09:14:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:47:50.026 [2024-07-12 09:14:24.837055] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:47:50.026 [2024-07-12 09:14:24.837945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176604 ] 00:47:50.026 [2024-07-12 09:14:25.007290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:50.284 [2024-07-12 09:14:25.226647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:50.284 [2024-07-12 09:14:25.226641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:50.849 09:14:26 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:50.849 Checking default timeout settings: 00:47:50.849 09:14:26 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:47:50.849 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:47:50.849 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:47:51.415 Making settings changes with rpc: 00:47:51.415 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:47:51.415 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:47:51.673 Check default vs. modified settings: 00:47:51.673 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:47:51.673 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:47:51.931 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:47:51.931 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:47:51.931 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_176578 00:47:51.931 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:47:51.931 09:14:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:47:51.931 Setting action_on_timeout is changed as expected. 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 Setting timeout_us is changed as expected. 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:47:51.931 Setting timeout_admin_us is changed as expected. 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
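For reference, the check above reduces to a short RPC sequence: save the target's default configuration, change the three bdev_nvme timeout settings over JSON-RPC, save again, and compare the three fields. A minimal sketch of the same sequence run by hand follows; the rpc.py path and the timeout values are the ones used in this run, and the temp file names are placeholders.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default                        # capture defaults
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
         --timeout-admin-us=24000000 --action-on-timeout=abort      # apply the modified timeouts
    $rpc save_config > /tmp/settings_modified                       # capture the modified config
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        echo "$setting: ${before:-none} -> $after"
    done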
00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_176578 /tmp/settings_modified_176578 00:47:51.931 09:14:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 176604 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 176604 ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 176604 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176604 00:47:51.931 killing process with pid 176604 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176604' 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 176604 00:47:51.931 09:14:27 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 176604 00:47:54.460 RPC TIMEOUT SETTING TEST PASSED. 00:47:54.460 09:14:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:47:54.460 ************************************ 00:47:54.460 END TEST nvme_rpc_timeouts 00:47:54.460 ************************************ 00:47:54.460 00:47:54.460 real 0m4.561s 00:47:54.460 user 0m8.846s 00:47:54.460 sys 0m0.642s 00:47:54.460 09:14:29 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:54.460 09:14:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:47:54.460 09:14:29 -- common/autotest_common.sh@1142 -- # return 0 00:47:54.460 09:14:29 -- spdk/autotest.sh@243 -- # uname -s 00:47:54.460 09:14:29 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:47:54.460 09:14:29 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:47:54.460 09:14:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:47:54.460 09:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:54.460 09:14:29 -- common/autotest_common.sh@10 -- # set +x 00:47:54.460 ************************************ 00:47:54.460 START TEST sw_hotplug 00:47:54.460 ************************************ 00:47:54.460 09:14:29 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:47:54.460 * Looking for test storage... 
00:47:54.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:47:54.460 09:14:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:54.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:54.717 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:55.675 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:47:55.675 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:47:55.675 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:47:55.675 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@230 -- # local class 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:47:55.675 09:14:30 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@15 -- # local i 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:47:55.676 09:14:30 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:47:55.676 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:47:55.676 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:47:55.676 09:14:30 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:55.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:55.934 Waiting for block devices as requested 00:47:55.934 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:56.192 09:14:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:47:56.192 09:14:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:56.450 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:47:56.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:56.450 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:57.822 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:47:57.822 09:14:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:57.822 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=177199 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:47:57.823 09:14:32 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:47:57.823 09:14:32 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:47:57.823 09:14:32 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:47:57.823 09:14:32 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:47:57.823 09:14:32 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:47:57.823 09:14:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:47:57.823 Initializing NVMe Controllers 00:47:57.823 Attaching to 0000:00:10.0 00:47:57.823 Attached to 0000:00:10.0 00:47:57.823 Initialization complete. Starting I/O... 
00:47:57.823 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:47:57.823 00:47:59.197 QEMU NVMe Ctrl (12340 ): 2169 I/Os completed (+2169) 00:47:59.197 00:47:59.764 QEMU NVMe Ctrl (12340 ): 4962 I/Os completed (+2793) 00:47:59.764 00:48:01.137 QEMU NVMe Ctrl (12340 ): 7993 I/Os completed (+3031) 00:48:01.137 00:48:02.073 QEMU NVMe Ctrl (12340 ): 11051 I/Os completed (+3058) 00:48:02.073 00:48:03.007 QEMU NVMe Ctrl (12340 ): 14136 I/Os completed (+3085) 00:48:03.007 00:48:03.574 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:03.574 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:03.574 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:03.575 [2024-07-12 09:14:38.747081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:48:03.575 Controller removed: QEMU NVMe Ctrl (12340 ) 00:48:03.575 [2024-07-12 09:14:38.748448] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.748514] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.748542] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.748564] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:48:03.575 [2024-07-12 09:14:38.754028] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.754081] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.754105] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.575 [2024-07-12 09:14:38.754140] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:03.833 09:14:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:48:03.833 Attaching to 0000:00:10.0 00:48:03.833 Attached to 0000:00:10.0 00:48:03.833 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:48:03.833 00:48:04.768 QEMU NVMe Ctrl (12340 ): 3044 I/Os completed (+3044) 00:48:04.768 00:48:06.145 QEMU NVMe Ctrl (12340 ): 6089 I/Os completed (+3045) 00:48:06.145 00:48:07.086 QEMU NVMe Ctrl (12340 ): 9068 I/Os completed (+2979) 00:48:07.086 00:48:08.018 QEMU NVMe Ctrl (12340 ): 12066 I/Os completed (+2998) 00:48:08.018 00:48:08.951 QEMU NVMe Ctrl (12340 ): 14971 I/Os completed (+2905) 00:48:08.951 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:09.885 [2024-07-12 09:14:44.949326] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:48:09.885 Controller removed: QEMU NVMe Ctrl (12340 ) 00:48:09.885 QEMU NVMe Ctrl (12340 ): 17907 I/Os completed (+2936) 00:48:09.885 00:48:09.885 [2024-07-12 09:14:44.950640] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.950688] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.950714] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.950736] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:48:09.885 [2024-07-12 09:14:44.956154] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.956211] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.956248] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 [2024-07-12 09:14:44.956267] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:09.885 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:48:09.885 EAL: Scan for (pci) bus failed. 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:09.885 09:14:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:09.885 09:14:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:10.144 09:14:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:10.144 09:14:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:10.144 09:14:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:48:10.144 Attaching to 0000:00:10.0 00:48:10.144 Attached to 0000:00:10.0 00:48:11.079 QEMU NVMe Ctrl (12340 ): 2333 I/Os completed (+2333) 00:48:11.079 00:48:12.014 QEMU NVMe Ctrl (12340 ): 5192 I/Os completed (+2859) 00:48:12.014 00:48:13.021 QEMU NVMe Ctrl (12340 ): 8060 I/Os completed (+2868) 00:48:13.021 00:48:13.957 QEMU NVMe Ctrl (12340 ): 10968 I/Os completed (+2908) 00:48:13.957 00:48:14.892 QEMU NVMe Ctrl (12340 ): 13856 I/Os completed (+2888) 00:48:14.892 00:48:15.826 QEMU NVMe Ctrl (12340 ): 16797 I/Os completed (+2941) 00:48:15.826 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:16.084 [2024-07-12 09:14:51.126932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:48:16.084 Controller removed: QEMU NVMe Ctrl (12340 ) 00:48:16.084 [2024-07-12 09:14:51.128226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.128301] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.128329] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.128352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:48:16.084 [2024-07-12 09:14:51.133718] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.133772] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.133794] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 [2024-07-12 09:14:51.133814] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:16.084 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:16.342 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:16.342 09:14:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:48:16.342 Attaching to 0000:00:10.0 00:48:16.342 Attached to 0000:00:10.0 00:48:16.342 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:48:16.342 [2024-07-12 09:14:51.308575] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:48:22.898 09:14:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:48:22.898 09:14:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:22.898 09:14:57 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.56 00:48:22.898 09:14:57 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.56 00:48:22.898 09:14:57 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:48:22.898 09:14:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.56 00:48:22.898 09:14:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.56 1 00:48:22.898 remove_attach_helper took 24.56s to complete (handling 1 nvme drive(s)) 09:14:57 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 177199 00:48:28.166 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (177199) - No such process 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 177199 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=177572 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 
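The block above is the first hotplug pass: the standalone hotplug example application runs I/O against the single NVMe device while the test surprise-removes and re-attaches it three times, which took 24.56s in total. Removal and re-discovery go through the kernel's PCI sysfs interface; a rough sketch of one such cycle is below. The exact files the helper writes are not spelled out in this trace, so treat this as the generic Linux PCI hotplug recipe rather than the script's literal code (the rescan path does appear verbatim in the trap set a few entries further on).

    bdf=0000:00:10.0                              # the only NVMe device in this run
    echo 1 > /sys/bus/pci/devices/$bdf/remove     # surprise-remove: the controller drops into failed state
    sleep 6                                       # hotplug_wait used by the test
    echo 1 > /sys/bus/pci/rescan                  # re-discover the device so it can be re-attached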
'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:28.166 09:15:03 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 177572 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 177572 ']' 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:28.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:48:28.166 09:15:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:28.424 [2024-07-12 09:15:03.378233] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:48:28.424 [2024-07-12 09:15:03.378413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177572 ] 00:48:28.424 [2024-07-12 09:15:03.537622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:28.683 [2024-07-12 09:15:03.765119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:48:29.618 09:15:04 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:48:29.618 09:15:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:36.177 09:15:10 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:36.177 09:15:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:36.177 09:15:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:36.177 09:15:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:36.177 09:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:36.177 09:15:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:36.177 09:15:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:36.177 09:15:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:36.177 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:36.743 09:15:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:36.743 09:15:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:36.743 09:15:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:36.743 09:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:37.310 09:15:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:37.310 09:15:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:37.310 09:15:12 sw_hotplug -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:37.310 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:37.875 09:15:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:37.875 09:15:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:37.875 09:15:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:37.875 09:15:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:38.442 09:15:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:38.442 09:15:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:38.442 09:15:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:38.442 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:39.006 09:15:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:39.006 09:15:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:39.006 09:15:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:39.006 09:15:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:39.006 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:39.006 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:48:39.571 09:15:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:39.571 09:15:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:39.571 09:15:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:39.571 09:15:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:40.136 09:15:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:40.136 09:15:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:40.136 09:15:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:40.136 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:40.700 09:15:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:40.700 09:15:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:40.700 09:15:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:40.700 09:15:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:41.265 09:15:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:41.265 09:15:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:41.265 09:15:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:41.265 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:41.831 09:15:16 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:41.831 09:15:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:41.831 09:15:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:41.831 09:15:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:41.831 09:15:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:42.397 09:15:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:42.397 09:15:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:42.397 09:15:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:42.397 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:42.962 09:15:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:42.962 09:15:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:42.962 09:15:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:42.962 09:15:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:43.220 [2024-07-12 09:15:18.342500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
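The block above is the device-removal wait from nvme/sw_hotplug.sh lines 50-51 repeating every half second: the test re-reads the attached NVMe bdevs over JSON-RPC and keeps polling until the removed controller's PCI address (0000:00:10.0) stops appearing, and the jq / sort -u lines swap order between iterations only because xtrace prints the process-substitution subshells in whatever order they start. A minimal reconstruction of that loop from the traced commands follows; the helper name bdev_bdfs, the jq filter, and the 0.5 s sleep are taken from the trace, the loop condition is inferred from the evaluated (( 1 > 0 )) / (( 0 > 0 )) lines, rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and the function name wait_for_detach exists only for this sketch.

    # Reconstruction of the traced wait loop, not the literal sw_hotplug.sh source.
    bdev_bdfs() {
        # PCI addresses (BDFs) behind the NVMe bdevs the SPDK app currently exposes.
        # The /dev/fd/63 seen in the trace is this process substitution.
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    wait_for_detach() {
        local bdfs
        bdfs=($(bdev_bdfs))
        while (( ${#bdfs[@]} > 0 )); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }

The nvme_ctrlr_fail and qpair-abort messages that close the block come from the SPDK libraries reacting to the surprise removal, not from the wait loop itself.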
00:48:43.220 [2024-07-12 09:15:18.344870] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.220 [2024-07-12 09:15:18.344969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.220 [2024-07-12 09:15:18.344999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.220 [2024-07-12 09:15:18.345038] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.220 [2024-07-12 09:15:18.345061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.220 [2024-07-12 09:15:18.345086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.220 [2024-07-12 09:15:18.345127] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.220 [2024-07-12 09:15:18.345192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.220 [2024-07-12 09:15:18.345222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.220 [2024-07-12 09:15:18.345251] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:43.220 [2024-07-12 09:15:18.345275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:43.220 [2024-07-12 09:15:18.345300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:43.477 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:43.477 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:43.477 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:43.477 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:43.477 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:43.478 09:15:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:43.478 09:15:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:43.478 09:15:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:43.478 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:43.736 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:43.736 09:15:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:48:50.314 09:15:24 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:50.314 09:15:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:50.314 09:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:50.314 09:15:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.314 09:15:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:50.314 09:15:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:50.314 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:50.880 09:15:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:50.880 09:15:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:50.880 09:15:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:50.880 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:51.446 09:15:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:51.446 09:15:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:51.446 09:15:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:51.446 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:52.014 09:15:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:52.014 09:15:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:52.014 09:15:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:52.014 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:52.579 09:15:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:52.579 09:15:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:52.579 09:15:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:52.579 09:15:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
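Once bdev_bdfs came back empty ((( 0 > 0 )) earlier in the trace), the re-attach half of the event ran, sw_hotplug.sh lines 56-66: an echo 1, then per device an echo of uio_pci_generic, the target BDF twice, and an empty string, followed by a fixed sleep 6 and the line-71 check that 0000:00:10.0 is reported by bdev_get_bdevs again (the backslash-escaped BDF on the right of the [[ == ]] is just how xtrace prints the pattern). xtrace does not print redirection targets, so the sysfs paths in the sketch below are assumptions; only the echoed values, the 6-second settle, and the final comparison come from the trace.

    # Illustrative re-attach; the sysfs paths are assumed, not read from the trace.
    reattach_and_verify() {
        local bdf=$1
        echo 1 > /sys/bus/pci/rescan                                        # assumed target of "echo 1"
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # assumed target
        echo "$bdf" > /sys/bus/pci/drivers_probe                            # assumed; the trace echoes the BDF twice
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # assumed; clears the override again
        sleep 6                                                             # settle time seen in the trace
        # Check seen in the trace: the BDF must show up in bdev_get_bdevs output again.
        [[ $(bdev_bdfs) == "$bdf" ]]
    }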
00:48:53.144 09:15:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:53.144 09:15:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:53.144 09:15:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:53.144 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:53.709 09:15:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:53.709 09:15:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:53.709 09:15:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:53.709 09:15:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:54.275 09:15:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:54.275 09:15:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:54.275 09:15:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:54.275 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:54.842 09:15:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:54.842 09:15:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:54.842 09:15:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:54.842 09:15:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:55.409 09:15:30 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:55.409 09:15:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:55.409 09:15:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:55.409 09:15:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:55.409 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:56.010 09:15:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:56.010 09:15:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:56.010 09:15:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:56.010 09:15:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:56.010 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:56.010 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:56.576 09:15:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:56.576 09:15:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:56.576 09:15:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:48:56.576 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:48:56.834 [2024-07-12 09:15:31.942675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
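The *ERROR*/*NOTICE* burst above is the expected SPDK-side teardown for a surprise removal: the controller is marked failed and the PCIe transport aborts whatever was still outstanding on the admin queue. Everything aborted here is an Admin Asynchronous Event Request (opcode 0x0c), and each completion is printed with status 00/07, i.e. Status Code Type 0 (generic) with Status Code 0x07, Command Abort Requested. A quick way to confirm from a saved console log that nothing other than AERs was in flight; the log file name is only an example:

    # Count the aborted-command notices and the AER command prints in a saved log;
    # matching counts mean only Async Event Requests were outstanding at removal time.
    grep -c 'ASYNC EVENT REQUEST (0c)' console.log
    grep -c 'ABORTED - BY REQUEST (00/07)' console.log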
00:48:56.834 [2024-07-12 09:15:31.944874] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:56.834 [2024-07-12 09:15:31.944960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.834 [2024-07-12 09:15:31.944989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.834 [2024-07-12 09:15:31.945028] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:56.834 [2024-07-12 09:15:31.945059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.834 [2024-07-12 09:15:31.945104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.834 [2024-07-12 09:15:31.945127] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:56.834 [2024-07-12 09:15:31.945172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.834 [2024-07-12 09:15:31.945193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:56.834 [2024-07-12 09:15:31.945221] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:48:56.834 [2024-07-12 09:15:31.945263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:48:56.834 [2024-07-12 09:15:31.945304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:48:57.093 09:15:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:48:57.093 09:15:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:48:57.093 09:15:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:48:57.093 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:48:57.351 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:48:57.351 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:48:57.351 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:03.910 09:15:38 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:03.910 09:15:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:03.910 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:03.910 09:15:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:03.910 09:15:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:03.910 09:15:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:03.910 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:04.476 09:15:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:04.476 09:15:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:04.476 09:15:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:04.476 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:05.042 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:05.043 09:15:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.043 09:15:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:05.043 09:15:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.043 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:05.043 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:05.609 09:15:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:05.609 09:15:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:05.609 09:15:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:05.609 09:15:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:06.175 09:15:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:06.175 09:15:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:06.175 09:15:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:06.175 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
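Between the verification and the new wait loop the trace also exposes the outer control flow (sw_hotplug.sh lines 38-43): hotplug_events is decremented as the while condition, each configured device gets another detach echo 1, and the wait/re-attach cycle starts over. Put together with the earlier sketches, the traced loop has roughly this shape; names not present in the trace are placeholders, and the detach redirection target is again hidden by xtrace.

    # Rough shape of the traced remove/attach cycle, not the literal script.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # events, wait seconds, bdev-based detection
        while (( hotplug_events-- )); do
            for dev in "${nvmes[@]}"; do
                echo 1 > "$detach_attr"      # placeholder: xtrace hides the redirection target
            done
            wait_for_detach                  # the 0.5 s polling loop sketched earlier
            for dev in "${nvmes[@]}"; do
                reattach_and_verify "$dev"   # rescan/bind plus 6 s settle plus bdev_bdfs check
            done
        done
    }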
00:49:06.742 09:15:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:06.742 09:15:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:06.742 09:15:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:06.742 09:15:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:07.309 09:15:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.309 09:15:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:07.309 09:15:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:07.309 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:07.878 09:15:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:07.878 09:15:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:07.878 09:15:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:07.878 09:15:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:08.445 09:15:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:08.445 09:15:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:08.445 09:15:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:08.445 09:15:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:09.010 09:15:44 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:09.010 09:15:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:09.010 09:15:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:09.010 09:15:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:09.010 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:09.576 09:15:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:09.576 09:15:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:09.576 09:15:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:09.576 09:15:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:10.140 09:15:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.140 09:15:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:10.140 09:15:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:10.140 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:10.702 09:15:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:10.702 09:15:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:10.702 09:15:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:10.702 09:15:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:11.267 09:15:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:11.267 09:15:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:11.267 09:15:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:11.267 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:11.832 09:15:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:11.832 09:15:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:11.832 09:15:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:11.832 09:15:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:12.397 09:15:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:12.397 09:15:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:12.397 09:15:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:12.397 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:12.962 09:15:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:12.962 09:15:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:12.962 09:15:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:12.962 09:15:47 sw_hotplug -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:12.962 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:12.962 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:13.527 09:15:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:13.527 09:15:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:13.527 09:15:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:13.527 09:15:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:14.091 09:15:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:14.091 09:15:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:14.091 09:15:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:14.091 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:14.655 09:15:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:14.655 09:15:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:14.655 09:15:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:14.655 09:15:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:15.219 09:15:50 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:15.219 09:15:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:15.219 09:15:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:15.219 09:15:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:15.219 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:15.784 09:15:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:15.784 09:15:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:15.784 09:15:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:15.784 09:15:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:16.349 09:15:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:16.349 09:15:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:16.349 09:15:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:16.349 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:16.916 09:15:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:16.916 09:15:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:16.916 09:15:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:16.916 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdev_bdfs 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:17.481 09:15:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:17.481 09:15:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:17.481 09:15:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:17.481 09:15:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:18.042 09:15:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:18.042 09:15:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:18.042 09:15:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:18.042 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:18.607 09:15:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:18.607 09:15:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:18.607 09:15:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:18.607 09:15:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:19.172 09:15:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:19.172 09:15:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:19.172 09:15:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:19.172 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:19.737 09:15:54 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:19.737 09:15:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:19.737 09:15:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:19.737 09:15:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:19.737 09:15:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:20.302 09:15:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:20.302 09:15:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:20.302 09:15:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:20.302 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:20.868 09:15:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:20.868 09:15:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:20.868 09:15:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:20.868 09:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:21.433 09:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:21.433 09:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:21.433 
09:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:21.433 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:21.691 [2024-07-12 09:15:56.743007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:49:21.691 [2024-07-12 09:15:56.745109] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:21.691 [2024-07-12 09:15:56.745187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.691 [2024-07-12 09:15:56.745236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.691 [2024-07-12 09:15:56.745269] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:21.691 [2024-07-12 09:15:56.745312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.691 [2024-07-12 09:15:56.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.691 [2024-07-12 09:15:56.745381] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:21.691 [2024-07-12 09:15:56.745423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.691 [2024-07-12 09:15:56.745445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.691 [2024-07-12 09:15:56.745471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:21.691 [2024-07-12 09:15:56.745503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:49:21.691 [2024-07-12 09:15:56.745529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:21.948 09:15:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:21.948 09:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:21.948 09:15:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:49:21.948 09:15:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 
0000:00:10.0 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:21.948 09:15:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@715 -- # time=58.64 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@716 -- # echo 58.64 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=58.64 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 58.64 1 00:49:28.528 remove_attach_helper took 58.64s to complete (handling 1 nvme drive(s)) 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:49:28.528 09:16:03 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:49:28.528 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 
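The xtrace lines above repeat one polling step: remove_attach_helper detaches the device, then queries the SPDK target every 0.5 s for the PCI addresses of its NVMe-backed bdevs until 0000:00:10.0 disappears. A minimal sketch of that helper and wait loop, reconstructed from the traced commands only (the wait_for_removal wrapper name is an assumption for illustration; in the test itself the loop runs inline inside remove_attach_helper, and rpc_cmd is the suite's wrapper around the SPDK JSON-RPC client):

    # List the PCI addresses (BDFs) of all NVMe-backed bdevs known to the running target.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the detached BDF no longer shows up in the bdev list.
    wait_for_removal() {
        local bdfs
        bdfs=($(bdev_bdfs))
        while (( ${#bdfs[@]} > 0 )); do
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }

Once the list is empty, the trace shows the device being handed to the uio_pci_generic driver and re-bound (sw_hotplug.sh@56-62), followed by a 6-second wait and a check that 0000:00:10.0 reappears in the bdev list before the hotplug event counter is decremented and the next cycle begins.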
00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:35.097 09:16:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:35.097 09:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:35.357 09:16:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.357 09:16:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:35.357 09:16:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:35.357 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:35.925 09:16:10 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.925 09:16:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:35.925 09:16:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:35.925 09:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:36.492 09:16:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.492 09:16:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:36.492 09:16:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:36.492 09:16:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:37.059 09:16:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:37.059 09:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:37.059 09:16:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:37.059 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:37.625 09:16:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:37.625 09:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:37.625 09:16:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:37.625 09:16:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 
00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:38.191 09:16:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:38.191 09:16:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:38.191 09:16:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:38.191 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:38.756 09:16:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:38.756 09:16:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:38.756 09:16:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:38.756 09:16:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:39.322 09:16:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:39.322 09:16:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:39.322 09:16:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:39.322 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:39.890 09:16:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:39.890 09:16:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:39.890 09:16:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:39.890 09:16:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 
0000:00:10.0 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:40.457 09:16:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:40.457 09:16:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:40.457 09:16:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:40.457 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:41.024 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:41.025 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:41.025 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:41.025 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:41.025 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:41.025 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:41.025 09:16:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:41.025 09:16:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:41.025 09:16:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:41.025 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:41.025 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:41.652 09:16:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:41.652 09:16:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:41.652 09:16:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:41.652 09:16:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:42.220 09:16:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.220 09:16:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:42.220 09:16:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.220 
09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:42.220 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:42.478 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:42.737 09:16:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:42.737 09:16:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:42.737 09:16:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:42.737 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:42.737 09:16:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:43.304 09:16:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:43.304 09:16:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:43.304 09:16:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:43.304 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:43.871 09:16:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:43.871 09:16:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:43.871 09:16:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:43.871 09:16:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:44.438 09:16:19 
sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:44.438 09:16:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:44.438 09:16:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:44.438 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:45.004 09:16:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:45.004 09:16:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:45.004 09:16:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:45.004 09:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:45.571 09:16:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:45.571 09:16:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:45.571 09:16:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:45.571 09:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:46.142 09:16:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:46.142 09:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:46.142 09:16:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:46.142 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:46.427 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:46.427 09:16:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:46.427 09:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:46.686 09:16:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:46.686 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:46.686 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:47.251 09:16:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:47.251 09:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:47.251 09:16:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:47.251 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:47.817 09:16:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:47.817 09:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:47.817 09:16:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:47.817 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:48.380 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:48.381 09:16:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:48.381 09:16:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:48.381 09:16:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:48.381 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:48.381 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be 
gone\n' 0000:00:10.0 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:48.946 09:16:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:48.946 09:16:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:48.946 09:16:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:48.946 09:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:49.511 09:16:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:49.511 09:16:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:49.511 09:16:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:49.511 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:50.077 09:16:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:50.077 09:16:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:50.077 09:16:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:50.077 09:16:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:50.077 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:50.077 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:50.643 09:16:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:50.643 09:16:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:50.643 09:16:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:50.643 09:16:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:51.209 09:16:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:51.209 09:16:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:51.209 09:16:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:51.209 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:51.776 09:16:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:51.776 09:16:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:51.776 09:16:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:51.776 09:16:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:52.342 09:16:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:52.342 09:16:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:52.342 09:16:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:52.342 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:49:52.910 09:16:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:52.910 09:16:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:52.910 09:16:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:52.910 09:16:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:53.477 09:16:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:53.477 09:16:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:53.477 09:16:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:53.477 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:54.041 09:16:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.041 09:16:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:54.041 09:16:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:54.041 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:54.609 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:54.609 09:16:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.609 09:16:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:54.609 09:16:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:54.610 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:54.610 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:54.867 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:54.867 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:54.867 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:54.867 09:16:30 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:49:54.867 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:54.867 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:54.867 09:16:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:54.867 09:16:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:55.125 09:16:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:55.125 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:55.125 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:55.691 09:16:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:55.691 09:16:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:55.691 09:16:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:55.691 09:16:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:56.258 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:56.259 09:16:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:56.259 09:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:56.259 09:16:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:56.259 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:56.259 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:56.826 09:16:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:56.826 09:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:56.826 09:16:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:56.826 09:16:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:57.398 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:57.398 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:57.398 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:57.398 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:57.399 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:57.399 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:57.399 09:16:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:57.399 09:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:57.399 09:16:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:57.399 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:57.399 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:57.672 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:57.672 09:16:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:57.672 09:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:57.930 09:16:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:57.930 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:49:57.930 09:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:49:57.930 [2024-07-12 09:16:33.006686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:49:57.930 [2024-07-12 09:16:33.008389] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:57.930 [2024-07-12 09:16:33.008449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:49:57.930 [2024-07-12 09:16:33.008476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:57.930 [2024-07-12 09:16:33.008508] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:57.930 [2024-07-12 09:16:33.008529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:49:57.930 [2024-07-12 09:16:33.008546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:57.930 [2024-07-12 09:16:33.008567] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:57.930 [2024-07-12 09:16:33.008600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:49:57.930 [2024-07-12 09:16:33.008623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:57.930 [2024-07-12 09:16:33.008653] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:49:57.930 [2024-07-12 09:16:33.008674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:49:57.930 [2024-07-12 09:16:33.008691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:49:58.496 09:16:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:58.496 09:16:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:49:58.496 09:16:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:49:58.496 09:16:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:50:05.068 09:16:39 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:05.068 09:16:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:05.068 09:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:05.326 09:16:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:05.326 09:16:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:05.326 09:16:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:05.326 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:05.891 09:16:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:05.891 09:16:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:05.891 09:16:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:05.891 09:16:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:06.457 09:16:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:06.457 09:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:06.457 09:16:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:06.457 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:07.023 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:07.023 09:16:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:07.023 09:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:07.023 09:16:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:07.023 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:07.023 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:07.589 09:16:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:07.589 09:16:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:07.589 09:16:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:07.589 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:50:08.250 09:16:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:08.250 09:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:08.250 09:16:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:08.250 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:08.507 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:08.507 09:16:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:08.507 09:16:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:08.507 09:16:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:08.765 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:08.765 09:16:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:09.023 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:09.023 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:09.281 09:16:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:09.281 09:16:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:09.281 09:16:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:09.281 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:09.846 09:16:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:09.846 09:16:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:09.846 09:16:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:09.846 09:16:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:10.413 09:16:45 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:10.413 09:16:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:10.413 09:16:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:10.413 09:16:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:10.413 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:10.980 09:16:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:10.980 09:16:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:10.980 09:16:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:10.980 09:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:11.546 09:16:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:11.546 09:16:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:11.546 09:16:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:11.546 09:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:12.111 09:16:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:12.111 09:16:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:12.111 09:16:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:12.111 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:12.693 09:16:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:12.693 09:16:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:12.693 09:16:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:12.693 09:16:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:12.951 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:12.951 09:16:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:12.951 09:16:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:13.209 09:16:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:13.209 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:13.209 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:13.773 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:13.774 09:16:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:13.774 09:16:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:13.774 09:16:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:13.774 09:16:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:14.339 09:16:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.339 09:16:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:14.339 09:16:49 sw_hotplug -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:14.339 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:14.905 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:14.906 09:16:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.906 09:16:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:14.906 09:16:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:14.906 09:16:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:15.246 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:15.246 09:16:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:15.246 09:16:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:15.246 09:16:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:15.519 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:15.519 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:15.778 09:16:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:15.778 09:16:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:15.778 09:16:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:15.778 09:16:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:16.036 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:16.037 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:16.603 09:16:51 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:16.603 09:16:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:16.603 09:16:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:16.603 09:16:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:16.603 09:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:17.169 09:16:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:17.169 09:16:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:17.169 09:16:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:17.169 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:17.736 09:16:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:17.736 09:16:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:17.736 09:16:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:17.736 09:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:18.305 09:16:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:18.305 09:16:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:18.305 09:16:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:18.305 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdev_bdfs 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:18.872 09:16:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:18.872 09:16:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:18.872 09:16:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:18.872 09:16:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:19.439 09:16:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:19.439 09:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:19.439 09:16:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:19.439 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:19.698 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:19.698 09:16:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:19.698 09:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:19.956 09:16:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:19.956 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:19.956 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:20.523 09:16:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.523 09:16:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:20.523 09:16:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:20.523 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:21.119 09:16:56 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:21.119 09:16:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:21.119 09:16:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:21.119 09:16:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:21.119 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:21.120 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:21.388 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:21.388 09:16:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:21.388 09:16:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:21.647 09:16:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:21.647 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:21.647 09:16:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:22.214 09:16:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:22.214 09:16:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:22.214 09:16:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:22.214 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:22.781 09:16:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:22.781 09:16:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:22.781 
09:16:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:22.781 09:16:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:23.349 09:16:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:23.349 09:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:23.349 09:16:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:23.349 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:23.915 09:16:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:23.915 09:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:23.915 09:16:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:23.915 09:16:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:24.478 09:16:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:24.478 09:16:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:24.478 09:16:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:24.478 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:25.042 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:25.042 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:25.042 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:25.042 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:25.042 09:16:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:25.042 09:16:59 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:25.042 09:16:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:25.042 09:16:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:25.042 09:16:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:25.042 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:25.042 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:25.607 09:17:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:25.607 09:17:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:25.607 09:17:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:25.607 09:17:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:26.173 09:17:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:26.173 09:17:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:26.173 09:17:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:26.173 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:26.431 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:26.431 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:26.689 09:17:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:26.689 09:17:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:26.689 09:17:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:26.689 09:17:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:27.255 09:17:02 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:27.255 09:17:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:27.255 09:17:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:27.255 09:17:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:27.255 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:27.856 09:17:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:27.856 09:17:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:27.856 09:17:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:27.856 09:17:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:28.421 09:17:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:28.421 09:17:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:28.421 09:17:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:28.421 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:28.987 09:17:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:28.987 09:17:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:28.987 09:17:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:28.987 09:17:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:29.553 09:17:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:29.553 09:17:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:29.553 09:17:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:29.553 09:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:30.120 09:17:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:30.120 09:17:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:30.120 09:17:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:30.120 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:30.687 09:17:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:30.687 09:17:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:30.687 09:17:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:30.687 09:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:31.255 09:17:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:31.255 09:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # 
set +x 00:50:31.255 09:17:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:31.255 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:31.822 09:17:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:31.822 09:17:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:31.822 09:17:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:31.822 09:17:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:32.389 09:17:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.389 09:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:32.389 09:17:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:32.389 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:32.649 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:32.649 09:17:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.649 09:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:32.908 09:17:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.908 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:32.908 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
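For context, the `jq` filter shown in these entries walks the `bdev_get_bdevs` reply, a JSON array of bdev objects whose NVMe-specific details live under `driver_specific.nvme`. The snippet below is illustrative only: the field values are invented and the reply is trimmed to the fields the filter actually touches.

```bash
# Illustrative bdev_get_bdevs reply (trimmed, values invented) and the filter from the trace.
cat > /tmp/bdevs.json <<'EOF'
[
  {
    "name": "Nvme0n1",
    "block_size": 4096,
    "num_blocks": 1310720,
    "driver_specific": {
      "nvme": [
        { "pci_address": "0000:00:10.0", "trid": { "trtype": "PCIe" } }
      ]
    }
  }
]
EOF
jq -r '.[].driver_specific.nvme[].pci_address' /tmp/bdevs.json
# prints: 0000:00:10.0
```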
00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:33.475 09:17:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:33.475 09:17:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:33.475 09:17:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:33.475 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:34.083 09:17:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:34.083 09:17:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.083 09:17:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:34.083 09:17:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.083 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:34.083 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:34.342 [2024-07-12 09:17:09.407282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:50:34.342 [2024-07-12 09:17:09.408938] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:34.342 [2024-07-12 09:17:09.409012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:50:34.342 [2024-07-12 09:17:09.409053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:34.342 [2024-07-12 09:17:09.409084] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:34.342 [2024-07-12 09:17:09.409102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:50:34.342 [2024-07-12 09:17:09.409137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:34.342 [2024-07-12 09:17:09.409156] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:34.342 [2024-07-12 09:17:09.409175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:50:34.342 [2024-07-12 09:17:09.409191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:34.342 [2024-07-12 09:17:09.409208] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:50:34.342 [2024-07-12 09:17:09.409248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:50:34.342 [2024-07-12 09:17:09.409290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # 
printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:34.342 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:34.342 09:17:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.342 09:17:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:34.601 09:17:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:50:34.601 09:17:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:41.160 09:17:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:41.160 09:17:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdfs=($(bdev_bdfs)) 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:41.418 09:17:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.418 09:17:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:41.418 09:17:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:41.418 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:41.983 09:17:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.983 09:17:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:41.983 09:17:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:41.983 09:17:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:42.550 09:17:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.550 09:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:42.550 09:17:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:42.550 09:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:43.115 09:17:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.115 09:17:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:43.115 09:17:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.115 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:43.115 
09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:43.681 09:17:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.681 09:17:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:43.681 09:17:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:43.681 09:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:44.247 09:17:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.247 09:17:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:44.247 09:17:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:44.247 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:44.813 09:17:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.813 09:17:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:44.813 09:17:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:44.813 09:17:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:45.378 09:17:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 
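A few entries back (around 09:17:09) the wait finally ends: the controller at 0000:00:10.0 is put into a failed state, its outstanding ASYNC EVENT REQUEST admin commands are aborted, the bdev count drops to zero, and the script re-plugs the device (the `echo uio_pci_generic`, `echo 0000:00:10.0` and `sleep 6` entries), confirms the BDF is visible again, decrements `hotplug_events`, and triggers the next removal — which is what the polling after that point is waiting on. The xtrace does not show where those `echo`s are redirected, so the sysfs paths in the sketch below are an assumption based on the standard Linux PCI hotplug interface, not a quote of the script.

```bash
# Assumed sysfs plumbing for one hotplug cycle; the redirection targets are not
# visible in the xtrace, so treat these paths as illustrative.
bdf=0000:00:10.0

# Surprise-remove the function from the PCI bus.
echo 1 > "/sys/bus/pci/devices/$bdf/remove"

# ... wait until bdev_bdfs reports nothing (see the loop sketched earlier) ...

# Bring the device back and hand it to a userspace-capable driver.
echo 1 > /sys/bus/pci/rescan
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again
sleep 6                                                  # give SPDK time to re-attach

# The test then checks that $bdf shows up in bdev_bdfs output again before
# decrementing its hotplug_events counter and repeating the cycle.
```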
00:50:45.378 09:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:45.378 09:17:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:45.378 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:45.943 09:17:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 09:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:45.943 09:17:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:45.943 09:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:46.508 09:17:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.508 09:17:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:46.508 09:17:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:46.508 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:47.074 09:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:47.074 09:17:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.074 09:17:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:47.074 09:17:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.074 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:47.074 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:47.640 09:17:22 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:47.640 09:17:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.640 09:17:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:47.640 09:17:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:47.640 09:17:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:48.207 09:17:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.207 09:17:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:48.207 09:17:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:48.207 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:48.772 09:17:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.772 09:17:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:48.772 09:17:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:48.772 09:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:49.030 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:49.030 09:17:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.030 09:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:49.288 09:17:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.288 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:49.288 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:49.854 09:17:24 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:49.854 09:17:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.854 09:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:49.854 09:17:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:49.854 09:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:50.427 09:17:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.427 09:17:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:50.427 09:17:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:50.427 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:50.994 09:17:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.994 09:17:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:50.994 09:17:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:50.994 09:17:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:51.560 09:17:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.560 09:17:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:51.560 09:17:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.560 09:17:26 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:51.560 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:52.126 09:17:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.126 09:17:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:52.126 09:17:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:52.126 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:52.692 09:17:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.692 09:17:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:52.692 09:17:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:52.692 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:52.950 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:52.950 09:17:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.950 09:17:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:53.208 09:17:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.208 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:53.208 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:53.775 09:17:28 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.775 09:17:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:53.775 09:17:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:53.775 09:17:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:54.340 09:17:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.340 09:17:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:54.340 09:17:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:54.340 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:54.905 09:17:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.905 09:17:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:54.905 09:17:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:54.905 09:17:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:55.226 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:55.226 09:17:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.226 09:17:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:55.226 09:17:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.484 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:55.484 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 
00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:56.051 09:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:56.051 09:17:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.051 09:17:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:56.051 09:17:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.051 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:56.051 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:56.617 09:17:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.617 09:17:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:56.617 09:17:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:56.617 09:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:57.182 09:17:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.182 09:17:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:57.182 09:17:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:57.182 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:57.746 09:17:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.746 09:17:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:57.746 09:17:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:57.746 09:17:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 
0000:00:10.0 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:58.311 09:17:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.311 09:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:58.311 09:17:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:58.311 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:58.878 09:17:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.878 09:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:58.878 09:17:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:58.878 09:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:50:59.444 09:17:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.444 09:17:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:50:59.444 09:17:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:50:59.444 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:00.008 09:17:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.008 09:17:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:00.008 09:17:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.008 
09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:00.008 09:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:00.573 09:17:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.573 09:17:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:00.573 09:17:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:00.573 09:17:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:01.148 09:17:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:01.148 09:17:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:01.148 09:17:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:01.148 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:01.406 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:01.406 09:17:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:01.406 09:17:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:01.669 09:17:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:01.669 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:01.669 09:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:02.244 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:02.245 09:17:37 
sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:02.245 09:17:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:02.245 09:17:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:02.245 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:02.579 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:02.579 09:17:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:02.579 09:17:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:02.838 09:17:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:02.838 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:02.838 09:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:03.138 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:03.138 09:17:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:03.138 09:17:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:03.138 09:17:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:03.397 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:03.397 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:03.682 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:03.682 09:17:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:03.682 09:17:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:03.682 09:17:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:03.940 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:03.940 09:17:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:04.507 09:17:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:04.507 09:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:04.507 09:17:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:04.507 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:05.073 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:05.073 09:17:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:05.073 09:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:05.073 09:17:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:05.073 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:05.073 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:05.639 09:17:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:05.639 09:17:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:05.639 09:17:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:05.639 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:06.204 09:17:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:06.204 09:17:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:06.204 09:17:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:06.204 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be 
gone\n' 0000:00:10.0 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:06.799 09:17:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:06.799 09:17:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:06.799 09:17:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:06.799 09:17:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:07.057 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:07.057 09:17:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:07.057 09:17:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:07.057 09:17:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:07.315 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:07.315 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:07.881 09:17:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:07.881 09:17:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:07.881 09:17:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:07.881 09:17:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:08.447 09:17:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:08.447 09:17:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:08.447 09:17:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:08.447 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:09.012 09:17:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:09.012 09:17:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:09.012 09:17:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:09.012 09:17:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:09.576 09:17:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:09.576 09:17:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:09.576 09:17:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:09.576 09:17:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:10.143 09:17:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:10.143 09:17:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:10.143 09:17:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:10.143 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:51:10.709 09:17:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:10.709 09:17:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:10.709 [2024-07-12 09:17:45.607861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:51:10.709 [2024-07-12 09:17:45.609367] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:51:10.709 [2024-07-12 09:17:45.609423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:51:10.709 [2024-07-12 09:17:45.609449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:10.709 [2024-07-12 09:17:45.609483] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:51:10.709 [2024-07-12 09:17:45.609503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:51:10.709 [2024-07-12 09:17:45.609521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:10.709 [2024-07-12 09:17:45.609540] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:51:10.709 [2024-07-12 09:17:45.609571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:51:10.709 [2024-07-12 09:17:45.609602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:10.709 [2024-07-12 09:17:45.609626] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:51:10.709 [2024-07-12 09:17:45.609658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:51:10.709 [2024-07-12 09:17:45.609684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:10.709 09:17:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:51:10.709 09:17:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:10.968 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:10.968 09:17:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:10.968 09:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:11.227 09:17:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # 
echo uio_pci_generic 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:51:11.227 09:17:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:51:17.810 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:51:17.810 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@715 -- # time=109.20 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@716 -- # echo 109.20 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=109.20 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 109.20 1 00:51:17.811 remove_attach_helper took 109.20s to complete (handling 1 nvme drive(s)) 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:51:17.811 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 177572 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 177572 ']' 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 177572 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177572 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:51:17.811 killing process with pid 177572 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177572' 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@967 -- # kill 177572 00:51:17.811 09:17:52 sw_hotplug -- common/autotest_common.sh@972 -- # wait 177572 00:51:19.710 09:17:54 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:19.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:51:19.710 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:51:21.087 00:51:21.087 real 3m26.529s 00:51:21.087 user 3m13.081s 00:51:21.087 sys 0m17.088s 00:51:21.087 09:17:55 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:21.087 09:17:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:51:21.087 
************************************ 00:51:21.087 END TEST sw_hotplug 00:51:21.087 ************************************ 00:51:21.087 09:17:55 -- common/autotest_common.sh@1142 -- # return 0 00:51:21.087 09:17:55 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:51:21.087 09:17:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:51:21.087 09:17:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:51:21.087 09:17:55 -- common/autotest_common.sh@10 -- # set +x 00:51:21.087 09:17:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:51:21.087 09:17:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:51:21.087 09:17:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:51:21.087 09:17:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:51:21.087 09:17:55 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:51:21.087 09:17:55 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:51:21.087 09:17:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:51:21.087 09:17:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:21.087 09:17:55 -- common/autotest_common.sh@10 -- # set +x 00:51:21.087 ************************************ 00:51:21.087 START TEST blockdev_raid5f 00:51:21.087 ************************************ 00:51:21.087 09:17:55 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:51:21.087 * Looking for test storage... 
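The END TEST sw_hotplug banner, the real/user/sys summary and the START TEST blockdev_raid5f banner above all come from the run_test wrapper in autotest_common.sh. Its body is not shown in this log, so the following is only a rough sketch of the shape the traced line numbers (@1099, @1105, @1142) and the printed banners suggest; the guard's error path and the banner helper are assumptions.

    run_test() {
        local test_name=$1
        [[ $# -le 1 ]] && return 1          # @1099: need a test name plus a command
        shift
        xtrace_disable                      # @1105: quiet the wrapper's own tracing
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                           # run the wrapped test, timed (real/user/sys above)
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return 0                            # @1142
    }

For this excerpt the wrapped command is test/bdev/blockdev.sh with the single argument raid5f, which is why the whole remainder of the log runs under the blockdev_raid5f prefix.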
00:51:21.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=180870 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 180870 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 180870 ']' 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:51:21.087 09:17:56 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:21.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:51:21.087 09:17:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:21.087 [2024-07-12 09:17:56.089676] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
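The blockdev.sh@46-49 entries just above show how these tests bring up the SPDK target: launch spdk_tgt in the background, remember its pid, install a cleanup trap, and block until its RPC socket at /var/tmp/spdk.sock is listening. A condensed sketch of that startup sequence, using the same helpers (killprocess, waitforlisten) that appear elsewhere in this log:

    # start_spdk_tgt as traced above (paths are the CI job's /home/vagrant layout).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"

Here the target comes up as pid 180870, which is the pid the trap later hands to killprocess.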
00:51:21.088 [2024-07-12 09:17:56.090117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180870 ] 00:51:21.088 [2024-07-12 09:17:56.264364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:21.346 [2024-07-12 09:17:56.507479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 Malloc0 00:51:22.280 Malloc1 00:51:22.280 Malloc2 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:51:22.280 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:22.280 09:17:57 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": 
[' ' "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "deae1ff5-847f-4606-ab2a-c0af14a690df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "047add75-665d-4d6a-92ff-d6cdb32513aa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b875bab4-bb6e-4d16-b653-88b469e1a93e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:51:22.539 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 180870 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 180870 ']' 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 180870 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180870 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:51:22.539 killing process with pid 180870 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180870' 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 180870 00:51:22.539 09:17:57 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 180870 00:51:25.066 09:17:59 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:51:25.066 09:17:59 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:51:25.066 09:17:59 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:51:25.066 09:17:59 blockdev_raid5f -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:51:25.066 09:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:25.066 ************************************ 00:51:25.066 START TEST bdev_hello_world 00:51:25.066 ************************************ 00:51:25.066 09:17:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:51:25.066 [2024-07-12 09:17:59.941753] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:51:25.066 [2024-07-12 09:17:59.942183] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180938 ] 00:51:25.066 [2024-07-12 09:18:00.113914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:25.324 [2024-07-12 09:18:00.330125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:25.892 [2024-07-12 09:18:00.833322] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:51:25.892 [2024-07-12 09:18:00.833449] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:51:25.892 [2024-07-12 09:18:00.833518] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:51:25.892 [2024-07-12 09:18:00.834194] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:51:25.892 [2024-07-12 09:18:00.834401] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:51:25.892 [2024-07-12 09:18:00.834449] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:51:25.892 [2024-07-12 09:18:00.834602] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
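For reference, the raid5f target exercised by hello_bdev above is the Raid Volume listed in the bdev_get_bdevs dump earlier in this log: three malloc base bdevs of 65536 x 512-byte blocks, combined at raid_level raid5f with strip_size_kb=2. The setup commands themselves are not traced in this excerpt; a minimal hand-run sketch using the standard rpc.py CLI (flag spellings assumed, run from the SPDK repo root) would be:

    # Illustrative only: recreate an equivalent raid5f volume over three malloc bdevs.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512   # 32 MiB => 65536 blocks of 512 B
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
    ./scripts/rpc.py bdev_malloc_create -b Malloc2 32 512
    ./scripts/rpc.py bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"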
00:51:25.892 00:51:25.892 [2024-07-12 09:18:00.834682] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:51:27.264 00:51:27.264 real 0m2.268s 00:51:27.264 user 0m1.856s 00:51:27.264 sys 0m0.297s 00:51:27.264 09:18:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:27.264 ************************************ 00:51:27.264 END TEST bdev_hello_world 00:51:27.264 09:18:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:51:27.264 ************************************ 00:51:27.264 09:18:02 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:51:27.264 09:18:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:51:27.264 09:18:02 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:51:27.264 09:18:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:27.264 09:18:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:27.264 ************************************ 00:51:27.265 START TEST bdev_bounds 00:51:27.265 ************************************ 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=181005 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:51:27.265 Process bdevio pid: 181005 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 181005' 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 181005 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 181005 ']' 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:27.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:51:27.265 09:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:51:27.265 [2024-07-12 09:18:02.264133] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:51:27.265 [2024-07-12 09:18:02.264394] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181005 ] 00:51:27.265 [2024-07-12 09:18:02.443878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:51:27.523 [2024-07-12 09:18:02.658445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:51:27.523 [2024-07-12 09:18:02.658600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:27.523 [2024-07-12 09:18:02.658595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:51:28.088 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:51:28.088 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:51:28.088 09:18:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:51:28.345 I/O targets: 00:51:28.345 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:51:28.345 00:51:28.345 00:51:28.345 CUnit - A unit testing framework for C - Version 2.1-3 00:51:28.345 http://cunit.sourceforge.net/ 00:51:28.345 00:51:28.345 00:51:28.345 Suite: bdevio tests on: raid5f 00:51:28.345 Test: blockdev write read block ...passed 00:51:28.345 Test: blockdev write zeroes read block ...passed 00:51:28.345 Test: blockdev write zeroes read no split ...passed 00:51:28.345 Test: blockdev write zeroes read split ...passed 00:51:28.345 Test: blockdev write zeroes read split partial ...passed 00:51:28.345 Test: blockdev reset ...passed 00:51:28.345 Test: blockdev write read 8 blocks ...passed 00:51:28.346 Test: blockdev write read size > 128k ...passed 00:51:28.346 Test: blockdev write read invalid size ...passed 00:51:28.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:51:28.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:51:28.346 Test: blockdev write read max offset ...passed 00:51:28.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:51:28.346 Test: blockdev writev readv 8 blocks ...passed 00:51:28.346 Test: blockdev writev readv 30 x 1block ...passed 00:51:28.346 Test: blockdev writev readv block ...passed 00:51:28.346 Test: blockdev writev readv size > 128k ...passed 00:51:28.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:51:28.346 Test: blockdev comparev and writev ...passed 00:51:28.346 Test: blockdev nvme passthru rw ...passed 00:51:28.346 Test: blockdev nvme passthru vendor specific ...passed 00:51:28.346 Test: blockdev nvme admin passthru ...passed 00:51:28.346 Test: blockdev copy ...passed 00:51:28.346 00:51:28.346 Run Summary: Type Total Ran Passed Failed Inactive 00:51:28.346 suites 1 1 n/a 0 0 00:51:28.346 tests 23 23 23 0 0 00:51:28.346 asserts 130 130 130 0 n/a 00:51:28.346 00:51:28.346 Elapsed time = 0.499 seconds 00:51:28.346 0 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 181005 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 181005 ']' 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 181005 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 181005 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:51:28.346 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 181005' 00:51:28.604 killing process with pid 181005 00:51:28.604 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 181005 00:51:28.604 09:18:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 181005 00:51:29.976 09:18:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:51:29.976 00:51:29.976 real 0m2.696s 00:51:29.976 user 0m6.310s 00:51:29.976 sys 0m0.390s 00:51:29.976 09:18:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:29.976 09:18:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:51:29.976 ************************************ 00:51:29.976 END TEST bdev_bounds 00:51:29.976 ************************************ 00:51:29.976 09:18:04 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:51:29.976 09:18:04 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:51:29.976 09:18:04 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:51:29.976 09:18:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:29.976 09:18:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:29.976 ************************************ 00:51:29.976 START TEST bdev_nbd 00:51:29.976 ************************************ 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:51:29.976 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:51:29.976 09:18:04 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=181068 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 181068 /var/tmp/spdk-nbd.sock 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 181068 ']' 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:51:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:51:29.977 09:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:51:29.977 [2024-07-12 09:18:05.011712] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:51:29.977 [2024-07-12 09:18:05.012100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:30.282 [2024-07-12 09:18:05.182534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:30.282 [2024-07-12 09:18:05.384795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:51:30.848 09:18:05 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:51:31.106 1+0 records in 00:51:31.106 1+0 records out 00:51:31.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563054 s, 7.3 MB/s 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:51:31.106 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:51:31.365 { 00:51:31.365 "nbd_device": "/dev/nbd0", 00:51:31.365 "bdev_name": "raid5f" 00:51:31.365 } 00:51:31.365 ]' 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:51:31.365 { 00:51:31.365 "nbd_device": "/dev/nbd0", 00:51:31.365 "bdev_name": "raid5f" 00:51:31.365 } 00:51:31.365 ]' 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:51:31.365 
09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:51:31.365 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:51:31.622 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:51:31.622 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:51:31.622 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:51:31.622 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:51:31.622 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:51:31.623 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:51:31.623 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:31.880 09:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:51:31.880 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:51:31.880 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:51:31.880 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local 
nbd_list 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:51:32.138 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:51:32.396 /dev/nbd0 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:51:32.396 1+0 records in 00:51:32.396 1+0 records out 00:51:32.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191004 s, 21.4 MB/s 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:32.396 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:51:32.655 
09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:51:32.655 { 00:51:32.655 "nbd_device": "/dev/nbd0", 00:51:32.655 "bdev_name": "raid5f" 00:51:32.655 } 00:51:32.655 ]' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:51:32.655 { 00:51:32.655 "nbd_device": "/dev/nbd0", 00:51:32.655 "bdev_name": "raid5f" 00:51:32.655 } 00:51:32.655 ]' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:51:32.655 256+0 records in 00:51:32.655 256+0 records out 00:51:32.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00822155 s, 128 MB/s 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:51:32.655 256+0 records in 00:51:32.655 256+0 records out 00:51:32.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033233 s, 31.6 MB/s 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:51:32.655 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:51:32.914 09:18:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:51:32.914 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:51:32.914 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:51:32.914 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:51:32.914 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:32.914 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:51:33.172 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:51:33.173 09:18:08 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:51:33.173 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:51:33.431 malloc_lvol_verify 00:51:33.431 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:51:33.690 9d1cbcb4-b1ec-4a07-90a7-b132871c725d 00:51:33.690 09:18:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:51:33.948 e7a4263d-e011-45f1-9604-5fc22d6a2cf1 00:51:33.948 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:51:34.206 /dev/nbd0 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:51:34.463 mke2fs 1.45.5 (07-Jan-2020) 00:51:34.463 Creating filesystem with 1024 4k blocks and 1024 inodes 00:51:34.463 00:51:34.463 Allocating group tables: 0/1 done 00:51:34.463 00:51:34.463 Filesystem too small for a journal 00:51:34.463 Writing inode tables: 0/1 done 00:51:34.463 Writing superblocks and filesystem accounting information: 0/1 done 00:51:34.463 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:51:34.463 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 181068 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 181068 ']' 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 181068 
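The bdev_nbd test above exports raid5f through the kernel NBD driver via the rpc.py calls traced in nbd_common.sh, writes a random pattern through /dev/nbd0, verifies it, and finally puts a small ext4 filesystem on an lvol before tearing everything down. Condensed into a hand-runnable sketch (same socket and device as in the trace; the scratch file path here is arbitrary, and the waitfornbd/waitfornbd_exit polling is omitted):

    # Sketch of the NBD round trip performed by nbd_function_test above.
    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk raid5f /dev/nbd0     # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB reference pattern
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                         # read back and compare
    ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0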
00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 181068 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:51:34.720 killing process with pid 181068 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 181068' 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 181068 00:51:34.720 09:18:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 181068 00:51:36.092 09:18:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:51:36.092 00:51:36.092 real 0m6.324s 00:51:36.092 user 0m8.955s 00:51:36.092 sys 0m1.141s 00:51:36.092 09:18:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:36.092 09:18:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:51:36.092 ************************************ 00:51:36.092 END TEST bdev_nbd 00:51:36.092 ************************************ 00:51:36.350 09:18:11 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:51:36.350 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:51:36.350 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:51:36.350 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:51:36.350 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:51:36.350 09:18:11 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:51:36.350 09:18:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:36.350 09:18:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:36.350 ************************************ 00:51:36.350 START TEST bdev_fio 00:51:36.350 ************************************ 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:51:36.350 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:51:36.350 ************************************ 00:51:36.350 START TEST bdev_fio_rw_verify 00:51:36.350 ************************************ 00:51:36.350 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:51:36.351 09:18:11 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:51:36.607 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:51:36.608 fio-3.35 00:51:36.608 Starting 1 thread 00:51:48.802 00:51:48.802 job_raid5f: (groupid=0, jobs=1): err= 0: pid=181327: Fri Jul 12 09:18:22 2024 00:51:48.802 read: IOPS=8977, BW=35.1MiB/s (36.8MB/s)(351MiB/10001msec) 00:51:48.802 slat (usec): min=23, max=537, avg=27.32, stdev= 5.23 00:51:48.802 clat (usec): min=13, max=856, avg=179.34, stdev=65.68 00:51:48.802 lat (usec): min=40, max=976, avg=206.65, stdev=66.40 00:51:48.802 clat percentiles (usec): 00:51:48.802 | 50.000th=[ 184], 99.000th=[ 306], 99.900th=[ 383], 99.990th=[ 676], 00:51:48.802 | 99.999th=[ 857] 00:51:48.802 write: IOPS=9460, BW=37.0MiB/s (38.8MB/s)(365MiB/9877msec); 0 zone resets 00:51:48.802 slat (usec): min=10, max=547, avg=22.51, stdev= 5.26 00:51:48.802 clat (usec): min=77, max=1146, avg=403.18, stdev=52.97 00:51:48.802 lat (usec): min=99, max=1323, avg=425.69, stdev=54.04 00:51:48.802 clat percentiles (usec): 00:51:48.802 | 50.000th=[ 408], 99.000th=[ 506], 99.900th=[ 709], 
99.990th=[ 1029], 00:51:48.802 | 99.999th=[ 1139] 00:51:48.802 bw ( KiB/s): min=33768, max=41564, per=98.90%, avg=37427.58, stdev=2254.26, samples=19 00:51:48.802 iops : min= 8442, max=10391, avg=9356.89, stdev=563.57, samples=19 00:51:48.802 lat (usec) : 20=0.01%, 50=0.01%, 100=5.99%, 250=34.44%, 500=58.83% 00:51:48.802 lat (usec) : 750=0.69%, 1000=0.04% 00:51:48.802 lat (msec) : 2=0.01% 00:51:48.802 cpu : usr=99.16%, sys=0.64%, ctx=837, majf=0, minf=6415 00:51:48.802 IO depths : 1=7.6%, 2=19.6%, 4=55.4%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:51:48.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:48.802 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:51:48.802 issued rwts: total=89782,93446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:51:48.802 latency : target=0, window=0, percentile=100.00%, depth=8 00:51:48.802 00:51:48.802 Run status group 0 (all jobs): 00:51:48.802 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=351MiB (368MB), run=10001-10001msec 00:51:48.802 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=365MiB (383MB), run=9877-9877msec 00:51:49.060 ----------------------------------------------------- 00:51:49.060 Suppressions used: 00:51:49.060 count bytes template 00:51:49.060 1 7 /usr/src/fio/parse.c 00:51:49.060 862 82752 /usr/src/fio/iolog.c 00:51:49.060 2 596 libcrypto.so 00:51:49.060 ----------------------------------------------------- 00:51:49.060 00:51:49.060 00:51:49.060 real 0m12.764s 00:51:49.060 user 0m13.579s 00:51:49.060 sys 0m0.631s 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:51:49.060 ************************************ 00:51:49.060 END TEST bdev_fio_rw_verify 00:51:49.060 ************************************ 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- 
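The bdev_fio_rw_verify run above drives stock fio through SPDK's spdk_bdev ioengine plugin; the job file generated earlier (a verify template from fio_config_gen plus serialize_overlap=1) simply has a [job_raid5f] section with filename=raid5f appended, and the remaining parameters come from the command line. Re-wrapped for readability, the invocation traced above is (the ASAN runtime is preloaded alongside the plugin because this build is ASAN-instrumented):

    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output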
common/autotest_common.sh@1301 -- # cat 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5e2326f7-2f8b-4bdf-a0e1-c595f9595d95",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "deae1ff5-847f-4606-ab2a-c0af14a690df",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "047add75-665d-4d6a-92ff-d6cdb32513aa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b875bab4-bb6e-4d16-b653-88b469e1a93e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:51:49.060 /home/vagrant/spdk_repo/spdk 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:51:49.060 00:51:49.060 real 0m12.927s 00:51:49.060 user 0m13.699s 00:51:49.060 sys 0m0.673s 00:51:49.060 ************************************ 00:51:49.060 END TEST bdev_fio 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:49.060 09:18:24 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:51:49.060 ************************************ 00:51:49.318 09:18:24 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:51:49.318 09:18:24 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:51:49.318 09:18:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:51:49.318 09:18:24 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:51:49.318 09:18:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:49.318 09:18:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:49.318 ************************************ 00:51:49.318 START TEST bdev_verify 00:51:49.318 ************************************ 00:51:49.318 09:18:24 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:51:49.318 [2024-07-12 09:18:24.368695] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:51:49.318 [2024-07-12 09:18:24.368950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181506 ] 00:51:49.576 [2024-07-12 09:18:24.554850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:49.834 [2024-07-12 09:18:24.835520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:51:49.834 [2024-07-12 09:18:24.848339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:50.400 Running I/O for 5 seconds... 00:51:55.667 00:51:55.667 Latency(us) 00:51:55.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:55.667 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:51:55.667 Verification LBA range: start 0x0 length 0x2000 00:51:55.667 raid5f : 5.01 7078.98 27.65 0.00 0.00 27241.88 264.38 26571.87 00:51:55.667 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:51:55.667 Verification LBA range: start 0x2000 length 0x2000 00:51:55.667 raid5f : 5.02 7119.64 27.81 0.00 0.00 26951.05 236.45 26452.71 00:51:55.667 =================================================================================================================== 00:51:55.667 Total : 14198.62 55.46 0.00 0.00 27096.01 236.45 26571.87 00:51:57.040 00:51:57.040 real 0m7.778s 00:51:57.040 user 0m14.057s 00:51:57.040 sys 0m0.364s 00:51:57.040 09:18:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:51:57.040 09:18:32 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:51:57.040 ************************************ 00:51:57.040 END TEST bdev_verify 00:51:57.040 ************************************ 00:51:57.040 09:18:32 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:51:57.040 09:18:32 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:51:57.040 09:18:32 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:51:57.040 09:18:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:51:57.040 09:18:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:51:57.040 ************************************ 00:51:57.040 START TEST bdev_verify_big_io 00:51:57.040 ************************************ 00:51:57.040 09:18:32 blockdev_raid5f.bdev_verify_big_io -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:51:57.040 [2024-07-12 09:18:32.213756] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:51:57.040 [2024-07-12 09:18:32.214073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181628 ] 00:51:57.298 [2024-07-12 09:18:32.390648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:57.556 [2024-07-12 09:18:32.625444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:51:57.556 [2024-07-12 09:18:32.625445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:51:58.121 Running I/O for 5 seconds... 00:52:04.675 00:52:04.675 Latency(us) 00:52:04.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:04.675 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:52:04.675 Verification LBA range: start 0x0 length 0x200 00:52:04.675 raid5f : 5.45 303.07 18.94 0.00 0.00 10639484.25 554.82 440401.92 00:52:04.675 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:52:04.675 Verification LBA range: start 0x200 length 0x200 00:52:04.675 raid5f : 5.45 302.90 18.93 0.00 0.00 10729316.32 194.56 444214.92 00:52:04.675 =================================================================================================================== 00:52:04.675 Total : 605.97 37.87 0.00 0.00 10684400.29 194.56 444214.92 00:52:04.934 00:52:04.934 real 0m7.952s 00:52:04.934 user 0m14.524s 00:52:04.934 sys 0m0.341s 00:52:04.934 09:18:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:04.934 ************************************ 00:52:04.934 09:18:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:52:04.934 END TEST bdev_verify_big_io 00:52:04.934 ************************************ 00:52:05.192 09:18:40 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:52:05.192 09:18:40 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:05.192 09:18:40 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:52:05.192 09:18:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:05.192 09:18:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:52:05.192 ************************************ 00:52:05.192 START TEST bdev_write_zeroes 00:52:05.192 ************************************ 00:52:05.192 09:18:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:05.192 [2024-07-12 09:18:40.212142] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 
00:52:05.192 [2024-07-12 09:18:40.212404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181731 ] 00:52:05.451 [2024-07-12 09:18:40.387958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:05.451 [2024-07-12 09:18:40.611565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:52:06.018 Running I/O for 1 seconds... 00:52:07.394 00:52:07.394 Latency(us) 00:52:07.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:52:07.394 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:52:07.394 raid5f : 1.01 20340.07 79.45 0.00 0.00 6268.57 1861.82 7268.54 00:52:07.394 =================================================================================================================== 00:52:07.394 Total : 20340.07 79.45 0.00 0.00 6268.57 1861.82 7268.54 00:52:08.765 00:52:08.765 real 0m3.461s 00:52:08.765 user 0m3.076s 00:52:08.765 sys 0m0.273s 00:52:08.765 09:18:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:08.765 09:18:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:52:08.765 ************************************ 00:52:08.765 END TEST bdev_write_zeroes 00:52:08.765 ************************************ 00:52:08.765 09:18:43 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:52:08.765 09:18:43 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:08.765 09:18:43 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:52:08.765 09:18:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:08.765 09:18:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:52:08.765 ************************************ 00:52:08.765 START TEST bdev_json_nonenclosed 00:52:08.765 ************************************ 00:52:08.765 09:18:43 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:08.765 [2024-07-12 09:18:43.719498] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:52:08.765 [2024-07-12 09:18:43.719691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181811 ] 00:52:08.765 [2024-07-12 09:18:43.881552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:09.023 [2024-07-12 09:18:44.104203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:52:09.023 [2024-07-12 09:18:44.104348] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:52:09.023 [2024-07-12 09:18:44.104412] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:52:09.023 [2024-07-12 09:18:44.104467] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:52:09.595 00:52:09.595 real 0m0.858s 00:52:09.595 user 0m0.609s 00:52:09.595 sys 0m0.149s 00:52:09.595 09:18:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:52:09.595 09:18:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:09.595 ************************************ 00:52:09.595 END TEST bdev_json_nonenclosed 00:52:09.595 09:18:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:52:09.595 ************************************ 00:52:09.595 09:18:44 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:52:09.595 09:18:44 blockdev_raid5f -- bdev/blockdev.sh@782 -- # true 00:52:09.595 09:18:44 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:09.595 09:18:44 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:52:09.595 09:18:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:52:09.595 09:18:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:52:09.595 ************************************ 00:52:09.595 START TEST bdev_json_nonarray 00:52:09.595 ************************************ 00:52:09.595 09:18:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:52:09.595 [2024-07-12 09:18:44.639098] Starting SPDK v24.09-pre git sha1 b3936a144 / DPDK 24.03.0 initialization... 00:52:09.595 [2024-07-12 09:18:44.639558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181844 ] 00:52:09.853 [2024-07-12 09:18:44.807059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:10.111 [2024-07-12 09:18:45.057509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:52:10.111 [2024-07-12 09:18:45.057696] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:52:10.111 [2024-07-12 09:18:45.057774] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:52:10.111 [2024-07-12 09:18:45.057811] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:52:10.369 00:52:10.369 real 0m0.898s 00:52:10.369 user 0m0.641s 00:52:10.369 sys 0m0.157s 00:52:10.369 09:18:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:52:10.369 09:18:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:10.369 ************************************ 00:52:10.369 END TEST bdev_json_nonarray 00:52:10.369 ************************************ 00:52:10.369 09:18:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:52:10.369 09:18:45 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@785 -- # true 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:52:10.369 09:18:45 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:52:10.369 00:52:10.369 real 0m49.593s 00:52:10.369 user 1m7.929s 00:52:10.369 sys 0m4.497s 00:52:10.369 09:18:45 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:52:10.369 09:18:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:52:10.369 ************************************ 00:52:10.369 END TEST blockdev_raid5f 00:52:10.369 ************************************ 00:52:10.627 09:18:45 -- common/autotest_common.sh@1142 -- # return 0 00:52:10.627 09:18:45 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:52:10.627 09:18:45 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:52:10.627 09:18:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:52:10.627 09:18:45 -- common/autotest_common.sh@10 -- # set +x 00:52:10.627 09:18:45 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:52:10.627 09:18:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:52:10.627 09:18:45 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:52:10.627 09:18:45 -- common/autotest_common.sh@10 -- # set +x 00:52:12.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:52:12.002 Waiting for block devices as requested 00:52:12.002 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:52:12.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:52:12.569 Cleaning 00:52:12.569 Removing: /var/run/dpdk/spdk0/config 00:52:12.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:52:12.569 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:52:12.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:52:12.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:52:12.569 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:52:12.569 Removing: /var/run/dpdk/spdk0/hugepage_info 00:52:12.569 Removing: /dev/shm/spdk_tgt_trace.pid111142 00:52:12.569 Removing: /var/run/dpdk/spdk0 00:52:12.569 Removing: /var/run/dpdk/spdk_pid110881 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111142 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111381 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111523 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111594 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111735 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111765 00:52:12.569 Removing: /var/run/dpdk/spdk_pid111941 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112220 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112397 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112506 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112633 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112756 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112870 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112930 00:52:12.569 Removing: /var/run/dpdk/spdk_pid112975 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113046 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113187 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113768 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113845 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113924 00:52:12.569 Removing: /var/run/dpdk/spdk_pid113945 00:52:12.569 Removing: /var/run/dpdk/spdk_pid114112 00:52:12.569 Removing: /var/run/dpdk/spdk_pid114139 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114314 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114350 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114419 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114449 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114518 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114541 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114761 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114805 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114869 00:52:12.570 Removing: /var/run/dpdk/spdk_pid114962 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115043 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115095 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115203 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115266 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115324 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115381 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115457 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115520 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115578 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115635 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115708 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115771 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115820 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115899 00:52:12.570 Removing: /var/run/dpdk/spdk_pid115959 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116018 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116074 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116152 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116203 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116264 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116323 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116405 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116468 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116560 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116720 00:52:12.570 Removing: /var/run/dpdk/spdk_pid116903 00:52:12.570 Removing: 
/var/run/dpdk/spdk_pid117018 00:52:12.570 Removing: /var/run/dpdk/spdk_pid117080 00:52:12.570 Removing: /var/run/dpdk/spdk_pid118408 00:52:12.570 Removing: /var/run/dpdk/spdk_pid118655 00:52:12.570 Removing: /var/run/dpdk/spdk_pid118881 00:52:12.570 Removing: /var/run/dpdk/spdk_pid119031 00:52:12.570 Removing: /var/run/dpdk/spdk_pid119195 00:52:12.570 Removing: /var/run/dpdk/spdk_pid119276 00:52:12.829 Removing: /var/run/dpdk/spdk_pid119334 00:52:12.829 Removing: /var/run/dpdk/spdk_pid119372 00:52:12.829 Removing: /var/run/dpdk/spdk_pid119897 00:52:12.829 Removing: /var/run/dpdk/spdk_pid119991 00:52:12.829 Removing: /var/run/dpdk/spdk_pid120128 00:52:12.829 Removing: /var/run/dpdk/spdk_pid120194 00:52:12.829 Removing: /var/run/dpdk/spdk_pid121631 00:52:12.829 Removing: /var/run/dpdk/spdk_pid122039 00:52:12.829 Removing: /var/run/dpdk/spdk_pid122262 00:52:12.829 Removing: /var/run/dpdk/spdk_pid123296 00:52:12.829 Removing: /var/run/dpdk/spdk_pid123697 00:52:12.829 Removing: /var/run/dpdk/spdk_pid123908 00:52:12.829 Removing: /var/run/dpdk/spdk_pid124941 00:52:12.829 Removing: /var/run/dpdk/spdk_pid125535 00:52:12.829 Removing: /var/run/dpdk/spdk_pid125753 00:52:12.829 Removing: /var/run/dpdk/spdk_pid128155 00:52:12.829 Removing: /var/run/dpdk/spdk_pid128679 00:52:12.829 Removing: /var/run/dpdk/spdk_pid128908 00:52:12.829 Removing: /var/run/dpdk/spdk_pid131301 00:52:12.829 Removing: /var/run/dpdk/spdk_pid131829 00:52:12.829 Removing: /var/run/dpdk/spdk_pid132045 00:52:12.829 Removing: /var/run/dpdk/spdk_pid134406 00:52:12.829 Removing: /var/run/dpdk/spdk_pid135184 00:52:12.829 Removing: /var/run/dpdk/spdk_pid135410 00:52:12.829 Removing: /var/run/dpdk/spdk_pid137972 00:52:12.829 Removing: /var/run/dpdk/spdk_pid138540 00:52:12.829 Removing: /var/run/dpdk/spdk_pid138776 00:52:12.829 Removing: /var/run/dpdk/spdk_pid141388 00:52:12.829 Removing: /var/run/dpdk/spdk_pid141996 00:52:12.829 Removing: /var/run/dpdk/spdk_pid142228 00:52:12.829 Removing: /var/run/dpdk/spdk_pid144822 00:52:12.829 Removing: /var/run/dpdk/spdk_pid145750 00:52:12.829 Removing: /var/run/dpdk/spdk_pid145978 00:52:12.829 Removing: /var/run/dpdk/spdk_pid146214 00:52:12.829 Removing: /var/run/dpdk/spdk_pid146818 00:52:12.829 Removing: /var/run/dpdk/spdk_pid147851 00:52:12.829 Removing: /var/run/dpdk/spdk_pid148376 00:52:12.829 Removing: /var/run/dpdk/spdk_pid149317 00:52:12.829 Removing: /var/run/dpdk/spdk_pid149941 00:52:12.829 Removing: /var/run/dpdk/spdk_pid150985 00:52:12.829 Removing: /var/run/dpdk/spdk_pid151558 00:52:12.829 Removing: /var/run/dpdk/spdk_pid154616 00:52:12.829 Removing: /var/run/dpdk/spdk_pid155379 00:52:12.829 Removing: /var/run/dpdk/spdk_pid155974 00:52:12.829 Removing: /var/run/dpdk/spdk_pid159278 00:52:12.829 Removing: /var/run/dpdk/spdk_pid160159 00:52:12.829 Removing: /var/run/dpdk/spdk_pid160843 00:52:12.829 Removing: /var/run/dpdk/spdk_pid162305 00:52:12.829 Removing: /var/run/dpdk/spdk_pid162871 00:52:12.829 Removing: /var/run/dpdk/spdk_pid164234 00:52:12.829 Removing: /var/run/dpdk/spdk_pid164803 00:52:12.829 Removing: /var/run/dpdk/spdk_pid166190 00:52:12.829 Removing: /var/run/dpdk/spdk_pid166767 00:52:12.829 Removing: /var/run/dpdk/spdk_pid167674 00:52:12.829 Removing: /var/run/dpdk/spdk_pid167755 00:52:12.829 Removing: /var/run/dpdk/spdk_pid167805 00:52:12.829 Removing: /var/run/dpdk/spdk_pid167882 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168019 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168192 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168413 00:52:12.829 Removing: 
/var/run/dpdk/spdk_pid168724 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168748 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168800 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168830 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168875 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168913 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168941 00:52:12.829 Removing: /var/run/dpdk/spdk_pid168969 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169008 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169055 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169087 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169115 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169142 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169174 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169229 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169256 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169288 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169316 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169364 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169392 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169440 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169471 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169517 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169615 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169663 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169691 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169742 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169768 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169810 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169874 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169901 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169944 00:52:12.829 Removing: /var/run/dpdk/spdk_pid169973 00:52:12.829 Removing: /var/run/dpdk/spdk_pid170017 00:52:12.829 Removing: /var/run/dpdk/spdk_pid170042 00:52:12.829 Removing: /var/run/dpdk/spdk_pid170066 00:52:12.829 Removing: /var/run/dpdk/spdk_pid170091 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170114 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170154 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170200 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170248 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170276 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170321 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170369 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170391 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170455 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170482 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170525 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170571 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170599 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170624 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170648 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170673 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170722 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170747 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170838 00:52:13.087 Removing: /var/run/dpdk/spdk_pid170959 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171132 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171162 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171220 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171307 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171339 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171368 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171401 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171453 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171503 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171593 00:52:13.087 Removing: /var/run/dpdk/spdk_pid171653 00:52:13.087 Removing: 
/var/run/dpdk/spdk_pid171718 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172004 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172135 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172183 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172273 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172378 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172422 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172702 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172805 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172919 00:52:13.087 Removing: /var/run/dpdk/spdk_pid172999 00:52:13.087 Removing: /var/run/dpdk/spdk_pid173029 00:52:13.087 Removing: /var/run/dpdk/spdk_pid173117 00:52:13.087 Removing: /var/run/dpdk/spdk_pid173670 00:52:13.087 Removing: /var/run/dpdk/spdk_pid173720 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174056 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174152 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174265 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174322 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174378 00:52:13.087 Removing: /var/run/dpdk/spdk_pid174410 00:52:13.087 Removing: /var/run/dpdk/spdk_pid175815 00:52:13.087 Removing: /var/run/dpdk/spdk_pid175957 00:52:13.087 Removing: /var/run/dpdk/spdk_pid175962 00:52:13.087 Removing: /var/run/dpdk/spdk_pid175988 00:52:13.087 Removing: /var/run/dpdk/spdk_pid176505 00:52:13.087 Removing: /var/run/dpdk/spdk_pid176604 00:52:13.087 Removing: /var/run/dpdk/spdk_pid177572 00:52:13.087 Removing: /var/run/dpdk/spdk_pid180870 00:52:13.087 Removing: /var/run/dpdk/spdk_pid180938 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181005 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181308 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181506 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181628 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181731 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181811 00:52:13.087 Removing: /var/run/dpdk/spdk_pid181844 00:52:13.087 Clean 00:52:13.345 09:18:48 -- common/autotest_common.sh@1451 -- # return 0 00:52:13.345 09:18:48 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:52:13.345 09:18:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:52:13.345 09:18:48 -- common/autotest_common.sh@10 -- # set +x 00:52:13.345 09:18:48 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:52:13.345 09:18:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:52:13.345 09:18:48 -- common/autotest_common.sh@10 -- # set +x 00:52:13.345 09:18:48 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:13.345 09:18:48 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:52:13.345 09:18:48 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:52:13.345 09:18:48 -- spdk/autotest.sh@391 -- # hash lcov 00:52:13.345 09:18:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:52:13.345 09:18:48 -- spdk/autotest.sh@393 -- # hostname 00:52:13.345 09:18:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:52:13.603 geninfo: WARNING: invalid characters removed from testname! 
00:53:09.879 09:19:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:12.406 09:19:46 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:15.685 09:19:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:19.032 09:19:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:23.220 09:19:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:26.506 09:20:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:29.796 09:20:04 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:53:29.796 09:20:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:29.796 09:20:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:53:29.796 09:20:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:29.796 09:20:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:29.796 09:20:04 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:53:29.796 09:20:04 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:53:29.796 09:20:04 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:53:29.796 09:20:04 -- paths/export.sh@5 -- $ export PATH 00:53:29.796 09:20:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:53:29.796 09:20:04 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:53:29.796 09:20:04 -- common/autobuild_common.sh@444 -- $ date +%s 00:53:29.796 09:20:04 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720776004.XXXXXX 00:53:29.796 09:20:04 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720776004.XWMT7K 00:53:29.796 09:20:04 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:53:29.796 09:20:04 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:53:29.796 09:20:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:53:29.796 09:20:04 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:53:29.796 09:20:04 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:53:29.796 09:20:04 -- common/autobuild_common.sh@460 -- $ get_config_params 00:53:29.796 09:20:04 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:53:29.796 09:20:04 -- common/autotest_common.sh@10 -- $ set +x 00:53:29.796 09:20:04 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:53:29.796 09:20:04 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:53:29.796 09:20:04 -- pm/common@17 -- $ local monitor 00:53:29.796 09:20:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:29.796 09:20:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:29.796 09:20:04 -- pm/common@25 -- $ sleep 1 00:53:29.796 09:20:04 -- pm/common@21 -- $ date +%s 00:53:29.796 09:20:04 -- pm/common@21 -- $ date +%s 00:53:29.796 09:20:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720776004 00:53:29.796 09:20:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720776004 00:53:30.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720776004_collect-vmstat.pm.log 00:53:30.053 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720776004_collect-cpu-load.pm.log 00:53:30.986 09:20:05 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:53:30.986 09:20:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
00:53:30.986 09:20:05 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:53:30.986 09:20:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:53:30.986 09:20:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:53:30.986 09:20:05 -- spdk/autopackage.sh@19 -- $ timing_finish 00:53:30.986 09:20:05 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:53:30.986 09:20:05 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:53:30.986 09:20:05 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:53:30.986 09:20:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:53:30.986 09:20:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:53:30.986 09:20:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:53:30.986 09:20:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:53:30.986 09:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:30.986 09:20:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:53:30.986 09:20:06 -- pm/common@44 -- $ pid=183489 00:53:30.986 09:20:06 -- pm/common@50 -- $ kill -TERM 183489 00:53:30.986 09:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:53:30.986 09:20:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:53:30.986 09:20:06 -- pm/common@44 -- $ pid=183490 00:53:30.986 09:20:06 -- pm/common@50 -- $ kill -TERM 183490 00:53:30.986 + [[ -n 2411 ]] 00:53:30.986 + sudo kill 2411 00:53:30.986 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:53:31.929 [Pipeline] } 00:53:31.952 [Pipeline] // timeout 00:53:31.958 [Pipeline] } 00:53:31.980 [Pipeline] // stage 00:53:31.986 [Pipeline] } 00:53:32.007 [Pipeline] // catchError 00:53:32.018 [Pipeline] stage 00:53:32.020 [Pipeline] { (Stop VM) 00:53:32.038 [Pipeline] sh 00:53:32.321 + vagrant halt 00:53:35.603 ==> default: Halting domain... 00:53:45.590 [Pipeline] sh 00:53:45.869 + vagrant destroy -f 00:53:49.186 ==> default: Removing domain... 00:53:50.132 [Pipeline] sh 00:53:50.410 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_2/output 00:53:50.419 [Pipeline] } 00:53:50.437 [Pipeline] // stage 00:53:50.443 [Pipeline] } 00:53:50.460 [Pipeline] // dir 00:53:50.465 [Pipeline] } 00:53:50.482 [Pipeline] // wrap 00:53:50.488 [Pipeline] } 00:53:50.503 [Pipeline] // catchError 00:53:50.512 [Pipeline] stage 00:53:50.514 [Pipeline] { (Epilogue) 00:53:50.528 [Pipeline] sh 00:53:50.808 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:54:12.778 [Pipeline] catchError 00:54:12.780 [Pipeline] { 00:54:12.794 [Pipeline] sh 00:54:13.070 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:54:13.070 Artifacts sizes are good 00:54:13.078 [Pipeline] } 00:54:13.091 [Pipeline] // catchError 00:54:13.099 [Pipeline] archiveArtifacts 00:54:13.105 Archiving artifacts 00:54:13.509 [Pipeline] cleanWs 00:54:13.519 [WS-CLEANUP] Deleting project workspace... 00:54:13.519 [WS-CLEANUP] Deferred wipeout is used... 00:54:13.524 [WS-CLEANUP] done 00:54:13.527 [Pipeline] } 00:54:13.545 [Pipeline] // stage 00:54:13.550 [Pipeline] } 00:54:13.563 [Pipeline] // node 00:54:13.567 [Pipeline] End of Pipeline 00:54:13.696 Finished: SUCCESS